Category archive: AI

Apple gives Siri an AI facelift

Apple has continued its journey into the world of artificial intelligence through the $200 million acquisition of machine learning start-up Turi, according to Geekwire.

The deal has not been explicitly confirmed by the team at Apple, though it does back up claims from CEO Tim Cook that the company is extending its footprint into the growing sub-sector. Although Apple has not been as prominent as Google and IBM in terms of grabbing headlines, both of which have been particularly vocal, a number of its products are built on the basic principles of artificial intelligence. Siri is a prime example, and expanding its potential through the implementation of more advanced technologies offers the chance to improve the user experience.

Turi offers tools which enable developers to embed machine learning into applications that automatically scale and tune themselves. Use cases for the technology include product recommendations, sentiment analysis, churn prediction and lead scoring for trial customers.
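
Turi's own tooling is not shown here, but the churn-prediction use case it targets can be sketched with a generic library such as scikit-learn; the features, data and model below are entirely hypothetical and only illustrate the kind of task developers would embed into an application.

```python
# Hypothetical churn-prediction sketch -- not Turi's API, just the sort of
# task its tools were designed to embed into an application.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy usage features per customer: [logins_last_30d, support_tickets, months_subscribed]
X = np.array([
    [25, 0, 18],
    [2,  4,  3],
    [14, 1, 12],
    [1,  6,  2],
    [30, 0, 24],
    [3,  5,  4],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = customer churned

model = LogisticRegression().fit(X, y)

new_customer = np.array([[4, 3, 5]])  # low engagement, several tickets
print("churn probability:", model.predict_proba(new_customer)[0, 1])
```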

The long-term plan for the business is not clear for the moment. Whether the tools will be made available to the Apple developer community or remain in-house for the tech giant, and even whether the company will remain in Seattle, is unknown while the acquisition remains officially unconfirmed.

“These experiences become more powerful and intuitive as we continue our long history of enriching our products through advanced artificial intelligence,” said Cook on the company’s earnings call last month. “We have focused our AI efforts on the features that best enhance the customer experience.”

During the briefing, Cook highlighted the potential for Siri to not only understand the words from a user but also identify the sentiment behind them. The acquisition of Turi could be the bridge from what is currently a relatively simplistic function to one which can more effectively predict what the consumer wants and better refine search results.

“We’re also using machine learning in many other ways across our products and services, including recommending songs, apps, and news,” said Cook. “Machine learning is improving facial and image recognition in photos, predicting word choice while typing in messages and mail, and providing context awareness in maps for better directions.

“Deep learning within our products even enables them to recognize usage patterns and improve their own battery life. And most importantly, we deliver these intelligent services while protecting users’ privacy. Most of the AI processing takes place on the device rather than being sent to the cloud.”

Although less vocal than other industry players, Apple has been expanding its capabilities through various acquisitions. Since the turn of 2015 the company has acquired 15 organizations, not yet counting Turi, and a number of those deals brought in machine learning competences. VocalIQ, a UK speech tech firm, and Perceptio, an image recognition company, were both bought in September last year, followed by facial recognition business Emotient in January.

The sluggish smartphone market has been causing challenges for manufacturers, driving the need to provide more differentiation. Hardware has provided little opportunity for brands to differentiate products and operating systems offer even less variance, meaning manufacturers have had to invest more in software solutions. Siri is already one of the more recognizable personal assistant features on the market, and the inclusion of an in-phone AI offering could bring about much needed differentiation.

Google grows (again) but ‘Other Bets’ cost the giant $1bn

Google has reported its Q2 numbers, continuing a strong run of performances within the technology industry, though efforts to diversify its overall business are not paying off just yet, reports Telecoms.com.

The Alphabet brand was announced last year with the aim of allowing the team to invest in other projects more freely, without being impeded by the advertising business. It would appear the management team is not afraid to throw R&D money at its innovation arm as it searches for another billion-dollar business, as the ‘Other Bets’ segment, which includes Google Fibre and the autonomous car projects, accounted for an operating loss of $859 million. Revenues in the segment did grow to $185 million, up 150% on the same quarter in 2015, though this number was made almost insignificant by the $19 billion generated by the advertising business.

The technology industry as a whole has been posting strong numbers over the last couple of weeks, though there has been a question as to whether two advertising giants can co-exist. Facebook reported significant growth yesterday, with advertising revenues across the period up 63% year-on-year to $6.2 billion, yet these numbers were dwarfed by Google’s, perhaps demonstrating there is room for both organizations to share advertising revenues, the unit value of which is decreasing, and still grow healthily.

With regard to the dwindling value of advertising, Google would appear to be combatting this with volume. CFO Ruth Porat highlighted that mobile search was the primary driver behind the year-on-year growth, though desktop and tablet search also grew.

Numbers such as these will grab headlines, meaning it can be easy to forget about the Google cloud business, one of the top priorities for the Alphabet business moving forward.

On the same day that AWS reported revenues of $2.9 billion for the quarter, Google’s cloud business also demonstrated solid growth. Although the numbers are not broken out separately, the ‘Other’ revenues segment, which includes the cloud business and other services such as Google Play, accounted for $2.1 billion over the three-month period, an increase of 33% on Q2 2015.

“Many tremendous digital experiences are being built in the cloud today, and businesses are working to take advantage of the cloud as part of their digital transformation,” said Google CEO Sundar Pichai. “We’ve been integrating our cloud and apps products to create more unified solutions for companies large and small, and these efforts are paying off.”

Following on from Pichai’s previous comments on the role of artificial intelligence on the Google cloud platform, and the wider Google business, its importance has been reiterated once again. Machine learning is being prioritized as the differentiator for Google in a competitive technology market, and only last week the team introduced two cloud machine learning APIs for speech and natural language to help enterprise customers convert audio to text and easily understand the structure and sentiment of the text in a variety of languages.
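
As an illustration of the natural language API mentioned above, the sketch below posts a piece of text to the Cloud Natural Language sentiment endpoint over REST. It assumes the public v1 endpoint and an API key supplied by the caller, so treat the exact request shape as indicative rather than definitive.

```python
# Hedged sketch: calling Google's Cloud Natural Language sentiment analysis
# over REST. The endpoint and payload shape are assumed from the public v1
# API; API_KEY is a placeholder you would supply yourself.
import requests

API_KEY = "your-api-key"  # placeholder
URL = f"https://language.googleapis.com/v1/documents:analyzeSentiment?key={API_KEY}"

payload = {
    "document": {"type": "PLAIN_TEXT", "content": "The new release is fantastic."},
    "encodingType": "UTF8",
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
sentiment = resp.json()["documentSentiment"]
print("score:", sentiment["score"], "magnitude:", sentiment["magnitude"])
```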

In terms of footprint, the team are not done growing yet. At the end of last month, Google and friends completed work on a new trans-Pacific submarine cable system, which will help the team launch a new Google Cloud Platform East Asia region in Tokyo. Back in March, the team confirmed it would be investing heavily in expansion of its cloud footprint with 12 new data centres around the world by the end of 2017.

AWS has previously stated it intends to break the $10 billion barrier in cloud revenues during 2016, though Google may not be that far behind. With its history of not being afraid to invest, and the growth numbers which have been witnessed over the last few quarters, Google could be set to accelerate.

Microsoft continues cloud transformation with 100% Azure growth

Microsoft has reported 5% growth to $22.6 billion as the Intelligent Cloud business unit led the charge, with the Azure public cloud offering more than doubling in revenues and compute usage, reports Telecoms.com.

The Intelligent Cloud unit, which includes server products and cloud services, Azure and enterprise mobility offerings, grew 7% to $6.7 billion, while the Productivity and Business Processes unit, which includes the Office commercial and consumer product lines as well as the Dynamics suite, grew 5% to $7 billion. Despite revenues in More Personal Computing declining 4% to $8.9 billion, Xbox Live monthly active users grew 33% year-over-year to 49 million and search advertising revenue grew 16% over the period.

“We delivered $22.6 billion in revenue this quarter, an increase of 5% for the quarter in constant currency,” said Satya Nadella, CEO at Microsoft. “This past year was pivotal in both our own transformation and in partnering with our customers who are navigating their own digital transformations. The Microsoft Cloud is seeing significant customer momentum and we’re well positioned to reach new opportunities in the year ahead.”

Cloud computing has once again brought Microsoft to the forefront of the technology industry following a challenging couple of years. The transition from software vendor to cloud computing brand appears to be being navigated successfully, though there were a few missteps along the way, most notably the team’s foray into mobile. Microsoft is moving towards the position of ‘mega-vendor’, infiltrating almost all aspects of an organization (cloud, hardware, social, databases etc.) to make itself an indispensable part of a CIO’s roster.

The Intelligent Cloud unit continues as the focal point of the company’s growth strategy, as Nadella claims nearly 60% of the Fortune 500 companies use at least three of the company’s cloud offerings, generating more than $12 billion in Commercial Cloud annualized revenue run rate.

“Companies looking to digitally transform need a trusted cloud partner and turn to Microsoft,” said Nadella. “As a result, Azure revenue and usage again grew by more than 100% this quarter. We see customers choose Microsoft for three reasons. They want a cloud provider that offers solutions that reflect the realities of today’s world and their enterprise-grade needs. They want higher level services to drive digital transformation, and they want a cloud open to developers of all types.”

AI has previously been positioned as one of the cornerstones of growth for the company, and this was reinforced during the earnings call as Nadella highlighted the AI component of the Intelligent Cloud business unit. The Cortana Intelligence Suite, formerly known as the Cortana Analytics Suite, is built on the company’s ongoing research into big data, machine learning, perception, analytics and intelligent bots. The offering allows developers to build apps and bots which interact with customers in a personalized way and react to real-world developments in real time.

“Just yesterday, we announced Boeing will use Azure, our IoT suite, and Cortana Intelligence to drive digital transformation in commercial aviation, with connected airline systems optimization, predictive maintenance, and much more,” said Nadella. “This builds on great momentum in IoT. This is great progress, but our ambitions are set even higher. Our Intelligent Cloud also enables cognitive services. Cortana Intelligence Suite offers machine learning capabilities and advanced predictive analytics.

“Central to our Intelligent Cloud ambition is providing developers with the tools and capabilities they need to build apps and services for the platforms and devices of their choice. The new Azure Container service as well as .NET Core 1.0 for open source and our ongoing work with companies such as Red Hat, Docker, and Mesosphere reflects significant progress on this front. We continue to see traction from open source, with nearly a third of customer virtual machines on Azure running Linux.”
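
The predictive maintenance scenario Nadella mentions boils down, in its simplest form, to anomaly detection over streaming sensor readings. The sketch below is a generic rolling z-score check, not Microsoft's IoT Suite or Cortana Intelligence code, and the window size, threshold and readings are all invented.

```python
# Generic predictive-maintenance-style anomaly check (illustrative only):
# flag a sensor reading that drifts far from its recent rolling average.
from collections import deque
from statistics import mean, pstdev

def make_anomaly_detector(window=20, threshold=3.0):
    history = deque(maxlen=window)

    def check(reading):
        anomalous = False
        if len(history) >= 5:  # need a little history before judging
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(reading - mu) / sigma > threshold:
                anomalous = True
        history.append(reading)
        return anomalous

    return check

check = make_anomaly_detector()
vibration = [0.50, 0.52, 0.49, 0.51, 0.50, 0.53, 0.48, 0.51, 1.90]  # final value spikes
for i, value in enumerate(vibration):
    if check(value):
        print(f"reading {i}: possible fault, value={value}")
```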

The company exceeded analyst expectations for the quarter, which was reflected in pre-market trading which saw shares in the giant rise 4%. In terms of outlook for the next quarter, most business units are expected to be down a fraction on the reported Q2 figures, unsurprising considering the summer period. Intelligent Cloud is expected to bring in between $6.1-6.3 billion, Productivity and Business Processes $6.4-6.6 billion, and More Personal Computing $8.7-9 billion.

IBM makes cloud progress but reports another quarterly decline

IBM revenues continued to fall for a 17th consecutive quarter despite beating analyst expectations and demonstrating healthy growth in its cloud and data business units, reports Telecoms.com.

The company reported a drop in revenues for Q2 of 2.8% to $20.24 billion, though this was an improvement on analyst expectations of $20.03 billion, encouraging shares to rise 2.6% to $164 after hours. The business units which the company deems strategic imperatives, cloud, analytics and engagement, gained 12% year-on-year, though this wasn’t enough to counter the impact of legacy technologies on reported earnings which fell to $2.5 billion from $3.45 billion in 2015. Overall, revenues are now roughly 25% lower than the numbers reported in 2011.

“We continued to deliver double-digit revenue growth in our strategic imperatives,” said CFO Martin Schroeter on the company’s earnings call this week. “Over the last 12 months, strategic imperatives delivered $31 billion in revenue, and now represent 38% of IBM.

“Growth was led by cloud, where our revenue was up 30% to $3.4 billion in the quarter, and over $11.5 billion over the last year so good progress in cloud. Looking at revenue from a segment perspective, the strongest growth came from cognitive solutions led by our analytics and cognitive capabilities and security.”

Schroeter was keen to emphasise the impact Watson is having on the business, as the team continue its journey to redefine Big Blue in the age of cloud computing. Numerous customers were listed as wins for IBM in the cognitive computing sector, as IBM continues to champion Watson as a platform to bring together the digital business with digital intelligence to improve decision-making and add intelligence to products and processes. Watson will continue to be the jewel in the crown of Big Blue as the company moves towards the new digital era.

Despite revenues continuing to fall, the team made a number of positive moves throughout the quarter: quantum computing is now available on the IBM cloud, a new partnership with Box was launched to counter the impact of the EU-US Privacy Shield on its international business, and an expanded partnership with VMware broadened the reach of its security portfolio.

In terms of the specific segments, revenues in the cognitive team rose 4%, though this is down from 9% growth in the previous quarter; solutions software revenue was up 6% for the quarter; SaaS was another area which recorded triple-digit growth; and Schroeter claims IBM’s security business outperformed the market by three times. The IBM Interactive Experience unit also demonstrated healthy growth as the team continues its journey into an entirely new market for Big Blue.

“We have opened over 30 digital studios around the globe including new studios in Singapore and Seoul,” said Schroeter. “We also completed the acquisition of Aperto, a digital agency in Berlin with over 300 employees and a roster of enterprise clients such as Airbus and Siemens.”

One area which has caught the headlines in recent weeks is the impact of Brexit on the fortunes of the technology sector. Despite concerns from various corners of the industry, it does not appear to have had a significant impact on the long-term vision of IBM.

“I don’t think that Brexit coming at the end of the quarter helped us at all, but we obviously finished kind of right where we expected to finish,” said Schroeter. “And when we look at our full view of the year, we don’t see an impact, if you will, that has any real materiality on us.

“What I typically observe in these kinds of instances is that our discussions with our clients have to go through a process of reprioritization. So as they reprioritize, the length of time that takes depends a lot on how much uncertainty they’re faced with. And obviously, the political leadership in Europe and the UK can help reduce that uncertainty, but we didn’t see – again, we don’t think it helped but it didn’t cause us to change our guidance.”

While revenues have continued to fall for the tech giant, it would appear to be heading in the right direction. The strategic imperatives business units now account for a larger proportion of the overall figures, at 38%, indicating the tide may be turning for IBM. Schroeter also highlighted that the team is not content to rely solely on the progress of Watson, as IBM has acquired 20 companies in the last twelve months, and these are now beginning to contribute in a more significant manner.

Although progress is starting to be seen, it has not been an entirely smooth ride for IBM. There have been numerous new product launches and advances into new market segments, though this has come at the cost of more than 70,000 redundancies over recent months. And while there was a slight increase in share price following the announcement, previous performance has taken its toll: shares in Big Blue have dropped 17% since CEO Virginia Rometty took over in January 2012, while the S&P 500 index rose 70% over the same period.

Google adds image recognition to growing AI portfolio

Google has continued its charge on the artificial intelligence market through purchasing French image recognition startup Moodstocks, reports Telecoms.com.

Moodstocks, founded in 2008, develops machine-learning based image recognition technology for smartphones, which has been described by developers as the ‘Shazam for images’. Financials of the agreement have not been confirmed to date.
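
As a loose illustration of what a ‘Shazam for images’ does, the sketch below matches a query photo against a small catalogue using colour-histogram features and cosine similarity. It is a toy stand-in rather than Moodstocks’ actual technology, and the catalogue, image data and function names are all invented.

```python
# Toy image-matching sketch (not Moodstocks' technology): represent each
# catalogue image by a colour histogram and return the closest match.
import numpy as np

def histogram_feature(image, bins=8):
    """image: HxWx3 uint8 array -> normalised per-channel colour histogram."""
    chans = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    feat = np.concatenate(chans).astype(float)
    return feat / (feat.sum() + 1e-9)

def best_match(query, catalogue):
    q = histogram_feature(query)
    scores = {}
    for name, img in catalogue.items():
        f = histogram_feature(img)
        scores[name] = float(np.dot(q, f) / (np.linalg.norm(q) * np.linalg.norm(f)))
    return max(scores, key=scores.get), scores

def synthetic_image(dominant_channel, rng):
    """Invented 'product' image with one dominant colour channel."""
    img = rng.integers(0, 80, (64, 64, 3), dtype=np.uint8)
    img[..., dominant_channel] = rng.integers(150, 256, (64, 64), dtype=np.uint8)
    return img

rng = np.random.default_rng(0)
catalogue = {
    "cereal_box": synthetic_image(0, rng),  # red-heavy
    "wine_label": synthetic_image(1, rng),  # green-heavy
    "book_cover": synthetic_image(2, rng),  # blue-heavy
}
# A "phone photo" of the wine label: the same image with a little noise added.
noise = rng.integers(-10, 10, (64, 64, 3))
query = np.clip(catalogue["wine_label"].astype(int) + noise, 0, 255).astype(np.uint8)
print(best_match(query, catalogue)[0])  # expected: wine_label
```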

“Ever since we started Moodstocks, our dream has been to give eyes to machines by turning cameras into smart sensors able to make sense of their surroundings,” Moodstocks said on its website. “Today, we’re thrilled to announce that we’ve reached an agreement to join forces with Google in order to deploy our work at scale. We expect the acquisition to be completed in the next few weeks.”

Artificial intelligence is one of the focal points of the Google strategy moving forward, which was confirmed by Google CEO Sundar Pichai during the company’s recent earnings call, though the focus can be dated back to the $625 million DeepMind acquisition in 2014. Although DeepMind is arguably the most advanced AI system in the industry, and Telecoms.com readers recently named Google the leader in the AI segment in a poll, the company has seemingly been playing catch-up with the likes of Watson and AWS, whose offerings have been in the public eye for a substantially longer period of time.

The recognition tools are most likely to be incorporated into the Android operating system, though Moodstocks customers will be able to continue using the service until the end of their subscriptions. The Moodstocks team will join Google’s R&D centre in France, where it will work alongside engineers focusing on the development of YouTube and Chrome, two offerings where there could be a link to the Moodstocks technology.

“Many Google services use machine learning to make them simpler and more useful in everyday life – such as Google Translate, Smart Reply in Inbox, or the Google app,” said Vincent Simonet, head of the R&D centre at Google’s French unit. “We have made great strides in visual recognition: you can now search in Google Photos for terms such as ‘party’ or ‘beach’ and the application will offer you relevant pictures without you ever having needed to categorize them manually.”

Last month, Google also announced it was expanding its machine learning research team by opening a dedicated office in Zurich. The team will focus on three areas specifically: machine intelligence, natural language processing & understanding, and machine perception.

Elsewhere in the industry, Twitter completed the acquisition of Magic Pony last month, reportedly for $150 million. The purchase of Magic Pony, which offers visual processing technology, was one of the more public AI moves made by the social media network, which has otherwise been unusually quiet given how well the platform lends itself to the implementation of AI. Microsoft also announced the purchase of Wand Labs, building on the ‘Conversation-as-a-Platform’ proposition put forward by CEO Satya Nadella at Build 2016.

What did we learn from EMC’s data protection report?

EMC has recently released its Global Data Protection Index 2016, in which it claims only 2% of organizations worldwide would be considered ‘leaders’ in protecting their own assets, reports Telecoms.com.

Data has dominated the headlines in recent months as breaches have made customers question how well enterprise organizations can manage and protect data. Combined with transatlantic disagreements over Safe Harbour and law enforcement agencies’ access to personal data, the ability to remain secure and credible is now more of a priority for decision makers.

“Our customers are facing a rapidly evolving data protection landscape on a number of fronts, whether it’s to protect modern cloud computing environments or to shield against devastating cyber-attacks,” said David Goulden, CEO of EMC Information Infrastructure. “Our research shows that many businesses are unaware of the potential impact and are failing to plan for them, which is a threat in itself.”

EMC’s report outlined a number of challenges and statistics which suggest the majority of the industry is not where it should be with regard to data protection. While only 2% of the industry would be considered leaders in the data protection category, 52% are still evaluating the options available to them. Overall, 13% more businesses suffered data loss in the last twelve months compared to the same period the year before.

But what are the over-arching lessons we learned from the report?

Vendors: Less is more

A fair assumption for most people would be that the more protection you take on, the more protected you are. This just seems logical. However, the study shows that the more vendors you count in your stable, the more data you will leak.

The average data loss incident costs a company 2.36TB of data, which would be considered substantial; however, it could be worse. The study showed organizations who used one vendor lost on average 0.83TB per incident, those with two vendors 2.04TB and those with three vendors 2.58TB. For those who used four or more vendors, an average of 5.47TB of data was lost per incident.

Common sense would dictate that the more layers of security you have, the more secure you will be; however, this is only the case if the systems are compatible with each other. It should be highlighted that those who lost the larger data sets are likely to be the larger companies, with more data to lose, though the study does seem to suggest there needs to be a more co-ordinated approach to data protection.

And they are expensive…

Using the same breakdown as before, the average cost of lost data was $900,000. For those with one vendor the cost was $636,361, for those with two it was $789,193, and for those with three vendors the cost was just above the average at $911,030. When companies bring in four or more vendors, the average cost of data loss rises to $1.767 million.
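
Putting the report’s per-vendor averages for volume and cost side by side gives a rough implied cost per terabyte; the snippet below simply does that arithmetic on the figures quoted above.

```python
# Back-of-the-envelope arithmetic on the report's quoted averages:
# terabytes lost and cost per incident, grouped by vendor count.
report_figures = {
    "1 vendor":   {"tb_lost": 0.83, "cost_usd": 636_361},
    "2 vendors":  {"tb_lost": 2.04, "cost_usd": 789_193},
    "3 vendors":  {"tb_lost": 2.58, "cost_usd": 911_030},
    "4+ vendors": {"tb_lost": 5.47, "cost_usd": 1_767_000},
}

for group, f in report_figures.items():
    per_tb = f["cost_usd"] / f["tb_lost"]
    print(f"{group}: {f['tb_lost']} TB lost, ${f['cost_usd']:,} per incident, "
          f"roughly ${per_tb:,.0f} per TB")
```

Notably, while the total cost per incident rises with the number of vendors, the implied cost per terabyte falls, which may simply reflect larger organizations losing bigger but less concentrated data sets.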

China and Mexico are the best

While it may be surprising, considering many of the latest breakthroughs in the data world have come from Silicon Valley or Israel, China and Mexico are the two countries which would be considered furthest ahead of the trend for data protection.

EMC graded each country on how effective its organizations were at implementing the right technologies and culture to prevent data loss. Seventeen countries featured ahead of the curve, including the usual suspects of the UK (13.5% ahead of the curve), the US (8%), Japan (1%) and South Korea (9%); however, China and Mexico led the charge at 20% and 17% ahead respectively.

While it may not be considered that unusual for China to have a strong handle on data within its own borders, Mexico is a little more surprising (at least to us at Telecoms.com). The country has gone through something of a technology revolution in recent years, growing over the last 20 years from a country where only 10% of people had a mobile phone to 68% this year, 70% of which are smartphones. Mexico is now the 11th largest economy in terms of purchasing power, with millennials the largest demographic. With the population becoming more affluent, and no longer constrained by the faults of the pre-internet world, the trend should continue. Keep up the good work Mexico.

Human error is still a talking point

When looking at the causes of data loss, the results were widespread, though the causes which cannot be controlled were at the top of the list. Hardware failure, power loss and software failure accounted for 45%, 35% and 34% respectively.

That said, the industry does now appear to be taking responsibility for the data itself: the study showed only 10% of data loss incidents were blamed on the vendor. A couple of weeks ago we spoke to Intel CTO Raj Samani, who highlighted that the attitude towards security (not just data protection) needs to shift, as there is no means of outsourcing risk. Minimizing risk is achievable, but irrespective of what agreements are undertaken with vendors, the risk still remains with you. As fewer people blame the vendors, it would appear this responsibility is being recognized.

Human error is another area which remains high on the agenda, as the study showed it accounts for 20% of all instances of data loss. While some of these instances can be blamed on leaving a laptop in the pub or losing a phone on the train, there are examples where simple mistakes in the workplace are to blame. Such mistakes will never be removed entirely, as numerous day-to-day decisions are made on intuition and gut feel, which remain a necessity for certain aspects of the business.

Artificial intelligence could be seen as another potential danger here. As AI advances, systems will become more human-like and thus more capable of making decisions based on intuition. If that is the ambition, an intuitive decision-making machine would surely present a security weakness in the same way a human does. Admittedly the risk per decision would be substantially smaller, but then the machine would be making many times more decisions than the human.

All in all, the report raises more questions than it answers. While security has been pushed to the top of the agenda for numerous organizations, receiving additional investment and attention, those same organizations do not appear to be getting any better at protecting themselves. The fact that 13% more organizations suffered data loss in the last 12 months suggests it could be getting worse.

To finish, the study asked individuals whether they felt their organization was well enough protected. Only 18% believe it is.

Twitter acquires machine learning start-up Magic Pony

Twitter has stepped up its efforts in the machine learning arena after announcing the acquisition of visual processing technology company Magic Pony.

While the company claims machine learning is central to the brand’s capabilities, it has been relatively quiet in the market segment in comparison to industry heavyweights such as IBM, Google and Microsoft. The deal, reported to be in the range of $150 million, is the third acquisition the team has made in this area, following the purchase of Whetlab last year and Mad Bits in 2014; by comparison, Google acquired Jetpac, Dark Blue Labs and Vision Factory, as well as spending a reported $500 million on DeepMind, all in 2014.

“Machine learning is increasingly at the core of everything we build at Twitter,” said Jack Dorsey, Twitter CEO. “Magic Pony’s machine learning technology will help us build strength into our deep learning teams with world-class talent, so Twitter can continue to be the best place to see what’s happening and why it matters, first. We value deep learning research to help make our world better, and we will keep doing our part to share our work and learnings with the community.”

The acquisition follows Twitter’s announcement last week that advertisers will now be able to utilize emoji keyword targeting for Twitter Ads. Although a simple proposition in the first instance, the new feature does open up the opportunity for machine-learning-enhanced advertising solutions.

Magic Pony, which was founded in 2014 and currently has 11 employees, was acquired to bolster the visual experiences that are delivered across Twitter apps. The team will link up with Twitter Cortex, the in-house machine learning department, to improve image processing expertise.

The technology itself makes use of convolutional neural networks to upscale an image. Taking the information in a low-resolution picture, it infers a larger, more detailed image by filling in the detail it expects to see. Much in the same way a human can imagine the rest of a car from seeing the door, the technology learns from previous examples and applies those lessons to new images.
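
A rough sense of this kind of model can be sketched in a few lines. The network below is a generic sub-pixel convolution upscaler written in PyTorch, not Magic Pony’s actual architecture; the layer sizes and upscale factor are arbitrary choices for illustration.

```python
# Minimal super-resolution-style CNN sketch (illustrative, untrained):
# convolutional features followed by a sub-pixel shuffle to upscale 2x.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, upscale_factor=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Produce upscale_factor**2 values per output pixel and channel.
            nn.Conv2d(32, 3 * upscale_factor ** 2, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(upscale_factor)

    def forward(self, x):
        return self.shuffle(self.features(x))

low_res = torch.rand(1, 3, 64, 64)           # fake 64x64 RGB input
high_res = TinyUpscaler(upscale_factor=2)(low_res)
print(high_res.shape)                         # torch.Size([1, 3, 128, 128])
```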

Magic Pony itself was initially supported by investment from Octopus Ventures, which has seemingly developed a speciality in finding promising AI start-ups. Prior to Magic Pony being acquired by Twitter, Octopus Ventures invested in Evi, which was acquired by Amazon in 2012, and SwiftKey, which was acquired by Microsoft this year.

“Today marks a great day for the Magic Pony team,” said Luke Hakes, Investment Director at Octopus Ventures. “We’re proud to have believed in the concept early on and to then have had the privilege of joining their journey. The technology Magic Pony has developed is revolutionary and pushes the boundaries of what is possible with AI in the video space.

“The UK continues to grow as the ‘go-to’ place for companies looking to build best in breed AI technology – Octopus has been fortunate to work with the founders of three companies in this space that have gone on to be acquired, with Evi and Amazon, SwiftKey and Microsoft, and now Magic Pony and Twitter. We are excited for the Magic Pony team, but also to take what we have learnt on the last three journeys and help the next generation of entrepreneurs lead the way in the on-going AI revolution.”

Machine learning front and centre of R&D for Microsoft and Google

Microsoft and Google have announced plans to expand their machine learning capabilities, through acquisition and new research offices respectively, reports Telecoms.com.

Building on the ‘Conversation-as-a-Platform’ proposition put forward by CEO Satya Nadella at Build 2016, the Microsoft team has announced plans to acquire Wand Labs. The purchase will add weight to the ‘Conversation-as-a-Platform’ strategy, as well as supporting innovation ambitions for Bing intelligence.

“Wand Labs’ technology and talent will strengthen our position in the emerging era of conversational intelligence, where we bring together the power of human language with advanced machine intelligence,” said David Ku, Corporate Vice President of the Information Platform Group on the company’s official blog. “It builds on and extends the power of the Bing, Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

More specifically, Wand Labs adds expertise in semantic ontologies, services mapping, third-party developer integration and conversational interfaces to the Microsoft engineering team. The ambition of the overarching project is to make the customer experience more seamless by harnessing human language in an artificial environment.

Microsoft’s move into the world of artificial intelligence and machine learning has not been a smooth ride to date, though this has not seemed to hinder investment. Back in March, the company’s AI-powered Twitter chatbot Tay went into meltdown mode, though the team pushed forward, updating its Cortana Intelligence Suite and releasing its Skype Bot Platform. Nadella has repeatedly highlighted that artificial intelligence and machine learning are the future for the company, stating at Build 2016:

“As an industry, we are on the cusp of a new frontier that pairs the power of natural human language with advanced machine intelligence. At Microsoft, we call this Conversation-as-a-Platform, and it builds on and extends the power of the Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

Google’s efforts in the machine learning world have also been pushed forward this week, as the team announced on its blog a dedicated machine learning research group based in its Zurich offices. The group will focus on three areas specifically: machine intelligence, natural language processing & understanding, and machine perception.

Like Microsoft, Google has prioritized artificial intelligence and machine learning, though both companies will be playing catch-up with the likes of IBM and AWS, whose AI propositions have been in the market for some time. Back in April, Google CEO Sundar Pichai said in the company’s earnings call “overall, I do think in the long run, I think we will evolve in computing from a mobile first to an AI first world,” outlining the ambitions of the team.

Google itself already has a number of machine learning capabilities incorporated in its product portfolio, though these could be considered relatively rudimentary. Translate, Photo Search and Smart Reply for Inbox already contain aspects of machine learning, though the team is targeting more complex and accurate competencies.

Elsewhere, Twitter has announced on its blog that advertisers will now be able to utilize emoji keyword targeting for Twitter Ads. The new feature uses emoji activity as a signal of a person’s mood or mindset, allowing advertisers to communicate marketing messages more effectively while minimizing the potential for backlash from disgruntled Twitter users. Although the blog does not mention the use of machine learning competencies, it does leave the opportunity for future innovation in the area.
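
To illustrate the idea of emoji activity as a mood signal, the sketch below scores a tweet by looking up each emoji in a small hand-made sentiment table; the table, scores and function are invented for illustration and have nothing to do with Twitter’s actual targeting system.

```python
# Illustrative only: estimate a tweet's mood from the emoji it contains,
# using an invented emoji-to-sentiment lookup table.
EMOJI_SENTIMENT = {
    "😀": 1.0, "😍": 1.0, "🎉": 0.8, "🙂": 0.5,
    "😐": 0.0,
    "🙁": -0.5, "😭": -0.8, "😡": -1.0,
}

def emoji_mood(tweet: str) -> float:
    """Average sentiment of known emoji in the tweet; 0.0 if none are found."""
    scores = [EMOJI_SENTIMENT[ch] for ch in tweet if ch in EMOJI_SENTIMENT]
    return sum(scores) / len(scores) if scores else 0.0

print(emoji_mood("Launch day 🎉😀 loving the new phone 😍"))  # positive score
print(emoji_mood("Three hours on hold 😡😭"))                 # negative score
```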

IBM launches weather predictor Deep Thunder for The Weather Company

IBM’s Weather Company has announced the launch of Deep Thunder to help companies predict the actual impact of various weather conditions.

By combining hyper-local, short-term custom forecasts developed by IBM Research with The Weather Company’s global forecast model, the team hopes to improve the accuracy of weather forecasting. Deep Thunder will lean on IBM’s machine learning technologies to aggregate a variety of historical data sets and future forecasts, providing fresh guidance every three hours.
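
Deep Thunder’s modelling is far more sophisticated than this, but the basic idea of adjusting a coarse global forecast with nearby station observations can be sketched as a simple distance-weighted correction; the station data, site coordinates and weighting scheme below are all invented for illustration.

```python
# Invented illustration of blending a coarse global forecast with nearby
# weather-station observations -- not Deep Thunder's actual method.
import math

def blend_forecast(global_temp_f, stations, site, length_scale_miles=1.0):
    """Adjust a global-model temperature using nearby station readings,
    weighting each station by its distance from the site of interest."""
    num = den = 0.0
    for lat, lon, observed_temp_f in stations:
        # Crude flat-earth distance in miles (fine at hyper-local scale).
        dlat = (lat - site[0]) * 69.0
        dlon = (lon - site[1]) * 69.0 * math.cos(math.radians(site[0]))
        dist = math.hypot(dlat, dlon)
        weight = math.exp(-dist / length_scale_miles)
        num += weight * (observed_temp_f - global_temp_f)
        den += weight
    correction = num / den if den else 0.0
    return global_temp_f + correction

# Hypothetical data: the global model says 71F; two personal weather stations nearby.
stations = [(41.8790, -87.6360, 68.5), (41.8755, -87.6290, 69.2)]
site = (41.8781, -87.6298)  # point of interest
print(round(blend_forecast(71.0, stations, site), 1))  # pulled towards the local readings
```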

“The Weather Company has relentlessly focused on mapping the atmosphere, while IBM Research has pioneered the development of techniques to capture very small scale features to boost accuracy at the hyper local level for critical decision making,” said Mary Glackin, Head of Science & Forecast Operations for The Weather Company. “The new combined forecasting model we are introducing today will provide an ideal platform to advance our signature services – understanding the impacts of weather and identifying recommended actions for all kinds of businesses and industry applications.”

The platform will combine more than 100 terabytes of third-party data daily, as well as data collected from the company’s 195,000 personal weather stations. The offering can be customized to suit the location of various businesses, with IBM execs claiming hyper-local forecasts can be narrowed to a resolution of between 0.2 and 1.2 miles, while also taking into account other local factors such as vegetation and soil conditions.

Applications for the new proposition range from agriculture to city planning and maintenance to validating insurance claims. IBM has also stated that consumer influences can be programmed into the platform, meaning retailers could use the insight to manage their supply chains and understand what should be stocked on shelves.