Category Archives: Big Data

IBM, Revive Vending set up London cafés that brew analytics-based insights

Honest Café is an unmanned coffee shop powered by Watson analytics

IBM is working with Revive Vending on London-based cafés that use Watson analytics to improve customer service and build a better understanding of their customers.

The ‘Honest Café’ locations – there are three in London, with four more planned – are all unmanned; instead, the company deploys high-end vending machines at each site, which serve a mix of health-conscious snacks, juices, food and drinks.

The company is using Watson analytics to compensate for the lack of wait staff, deploying the cognitive computing platform to trawl through sales data in a bid to unearth buying patterns and improve its marketing effectiveness.

“Because Honest is unmanned, it’s tough going observing what our customers are saying and doing in our cafes,” said Mark Summerill, head of product development at Honest Café. “We don’t know what our customers are into or what they look like. And as a start-up it’s crucial we know what sells and what areas we should push into.”

“We lacked an effective way of analyzing the data,” Summerill said. “We don’t have dedicated people on this and the data is sometimes hard to pull together to form a picture.”

“Watson Analytics could help us make sure we offer the right customers the right drink, possibly their favorite drink,” he said.

The company can act on these buying patterns by launching promotional offers on various goods to help improve sales, and it can also correlate the data with social media information to deepen its understanding of Honest Café customers.

“We identified that people who buy as a social experience have different reasons than those who dash in and out grabbing just the one drink,” Summerill said. “They also have a different payment method, the time of the day differs and the day of the week. Knowing this, we can now correctly time promotions and give them an offer or introduce a product that is actually relevant to them.”

“Having direct access to Twitter data for insights into what our buyers are talking about is going to help augment our view and understanding of our customers even further,” he added.
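Watson Analytics does this segmentation for Honest Café, but the underlying idea can be shown with a short, hypothetical sketch: group transaction records by visit type, payment method, day and hour to see when each kind of customer buys. The file and column names below (honest_cafe_sales.csv, transaction_id, payment_method and so on) are invented for illustration and are not Honest's actual data.

```python
# Hypothetical sketch of the segmentation described above, using pandas
# rather than Watson Analytics. All file and column names are illustrative.
import pandas as pd

sales = pd.read_csv("honest_cafe_sales.csv", parse_dates=["timestamp"])
sales["hour"] = sales["timestamp"].dt.hour
sales["weekday"] = sales["timestamp"].dt.day_name()

# Separate single-item "dash in and out" visits from multi-item social visits.
basket_size = sales.groupby("transaction_id")["item"].transform("count")
sales["visit_type"] = (basket_size == 1).map({True: "solo", False: "social"})

# Profile each segment by payment method and timing, largest groups first.
profile = (
    sales.groupby(["visit_type", "payment_method", "weekday", "hour"])
         .size()
         .reset_index(name="purchases")
         .sort_values("purchases", ascending=False)
)
print(profile.head(10))
```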

Big Data and Cloud Computing Service Set to Improve Healthcare

Suvro Ghosh, founder and CEO of Texas-based ClinZen LLC, has developed a cloud application built on big data that will help facilitate healthcare access for those living in the densely populated Indian city of Kolkata. The new platform, named 24by7.Care, also aims to connect those living in rural areas with those in the metropolis.

Ghosh has reported to the media, “Given Kolkata’s dense population and the plethora of problems regarding accessibility to healthcare at any given time, we need to build a framework based on latest technologies such as cloud computing and Big Data. The 24by7.Care platform is a database-dependent one and we are currently building a database.”

Big data refers to extremely large data sets that can be analyzed computationally to reveal patterns, trends, and associations, especially those relating to human behavior and interactions. The platform, combining big data and cloud computing, would aid Kolkata’s healthcare system by serving needs such as booking a doctor or admitting a patient to hospital.

This new healthcare system will initially launch in Kolkata and will be available on every computing platform. Cloud computing allows computing facilities to be accessed from anywhere over the network on a multitude of devices, ranging from smartphones to laptops and desktop computers. The system increases access to healthcare information and should therefore improve on the arrangements currently in place in Kolkata.

This new service is set to be implemented in three months.

Pivotal buys Quickstep Technologies in big data play

Pivotal is acquiring Quickstep Technologies to boost SQL performance

Pivotal has acquired Quickstep Technologies, a query execution technology developer, for an undisclosed sum. The company said the move could vastly improve the performance of its big data solutions.

Quickstep’s technology was developed at the University of Wisconsin-Madison by Jignesh Patel, a professor of computer sciences, and a team of developers at the school, funded in part by the US National Science Foundation. It is a relational data processing engine that incorporates a technique called BitWeaving, which uses bit-level parallelism to cut the number of CPU cycles needed to evaluate a predicate across a batch of values, delivering a large performance improvement when querying a database.
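BitWeaving itself is detailed in Patel’s published research; purely as an illustration of the bit-parallel idea, the sketch below packs the j-th bit of up to 64 values into a single machine word, so a “value < constant” predicate is answered for all of them with a handful of word-wide logical operations. This is a simplified, hypothetical example, not Quickstep’s actual engine.

```python
# Illustrative bit-parallel predicate evaluation in the spirit of a vertical
# (bit-sliced) layout: one 64-bit word holds the j-th bit of 64 values, so
# "value < constant" is computed for all of them with word-wide logic.
WORD = (1 << 64) - 1  # 64-bit mask

def bit_slice(values, bits):
    """Transpose values into one word per bit position, most significant first."""
    slices = []
    for j in reversed(range(bits)):
        word = 0
        for i, v in enumerate(values):
            word |= ((v >> j) & 1) << i
        slices.append(word)
    return slices

def less_than(slices, constant, bits):
    """Return a bitmask whose bit i is set iff values[i] < constant."""
    lt, eq = 0, WORD
    for j, col in zip(reversed(range(bits)), slices):  # MSB -> LSB
        c = WORD if (constant >> j) & 1 else 0
        lt |= eq & ~col & c & WORD    # tied so far, value bit 0 where constant bit is 1
        eq &= ~(col ^ c) & WORD       # still tied after this bit
    return lt

values = [5, 12, 3, 9, 7, 15, 0, 11]
mask = less_than(bit_slice(values, 4), 8, 4)
print([i for i in range(len(values)) if (mask >> i) & 1])  # indices of values < 8
```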

Patel is no stranger to the database space. His thesis work was commercialised by NCR when it was acquired by Teradata, and he also co-founded Locomatix, a startup that designed a platform to power real-time data-driven mobile services, which became part of Twitter two years ago.

“In the Quickstep project we have rethought from the ground up the algorithms that make up the DNA of data platforms so that the platform can deliver unprecedented speed for data analytics. It is time to move our ideas from research to actual products,” Patel said. “There is no better home for this technology than at Pivotal given Pivotal’s formidable track record in delivering real value to their customers in big data.”

Pivotal said the technology will be integrated as a new query execution framework for Greenplum Database and Pivotal HAWQ, which it claims will “provide orders of magnitude increase in performance for advanced analytics, machine learning, and advanced data science use cases.”

Sundeep Madra, vice president of Pivotal’s data product group, said: “Enterprises are seeking ever faster speeds for their data so that they can affect outcomes in real time. Quickstep brings to Pivotal a fresh way of thinking about data, one aligned to new capabilities in hardware and demanding expectations today’s businesses have. We look forward to bringing this technology to our customers, and welcome the Quickstep team to the Pivotal family.”

IBM, UK gov ink £313m deal to promote big data, cognitive compute research

IBM and the UK government are pouring £313m into big data and cognitive computing R&D

The UK government has signed a deal with IBM that will see the two parties fund a series of initiatives aimed at expanding cognitive computing and big data research.

The £313m partnership will see the UK government commit £113m to expand the Hartree Centre at Daresbury, a publicly funded facility geared towards reducing the cost and improving the efficiency and user-friendliness of high performance computing and big data for research and development purposes.

IBM said it will further support the project with technology and onsite expertise worth up to £200m, including access to the company’s cognitive computing platform Watson. The company will also place 24 IBM researchers at the Centre to help commercialise any promising innovations developed there.

The organisations will also explore how to leverage OpenPower-based systems for high performance computing.

“We live in an information economy – from the smart devices we use every day to the super-computers that helped find the Higgs Boson, the power of advanced computing means we now have access to vast amounts of data,” said UK Minister for Universities and Science Jo Johnson.

“This partnership with IBM, which builds on our £113 million investment to expand the Hartree Centre, will help businesses make the best use of big data to develop better products and services that will boost productivity, drive growth and create jobs.”

David Stokes, chief executive for IBM in the UK and Ireland said: “We’re at the dawn of a new era of cognitive computing, during which advanced data-centric computing models and open innovation approaches will allow technology to greatly augment decision-making capabilities for business and government.”

“The expansion of our collaboration with STFC builds upon Hartree’s successful engagement with industry and its record in commercialising technological developments, and provides a world-class environment using Watson and OpenPower technologies to extend the boundaries of Big Data and cognitive computing,” he added.

Woodside to deploy IBM Watson to improve oil & gas operations

Woodside will use Watson to improve employee training and oil & gas operations

Australian oil and gas firm Woodside will deploy IBM’s Watson-as-a-Service in order to improve operations and employee training, the companies announced this week.

The energy firm plans to use the cloud-based cognitive compute service to help train engineers and operations specialists on fabricating and managing oil and gas infrastructure and facilities.

The company said the cognitive advisory service it plans to use, ‘Lessons Learned’, will help improve operational processes and outcomes, drawing on more than thirty years of collective knowledge and experience of operating oil and gas assets.

Woodside senior vice president of strategy, science and technology Shaun Gregory said the move is part of a broader strategy to use data more intelligently at the firm.

“We are bringing a new toolkit to the company in the form of evidence based predictive data science that will bring down costs and increase efficiencies across our organization,” Gregory said.

“Data science, underpinned by an exponentially increasing volume and variety of data and the rapidly decreasing cost of computing, is likely to be a major disruptive technology in our industry over the next decade. Our plan is to turn all of this data into a predictive tool where every part of our organisation will be able to make decisions based on informed and accurate insights.”

Kerry Purcell, managing director of IBM Australia and New Zealand, said: “Here in Australia IBM Watson is transforming how banks, universities, government and now oil and gas companies capitalise on data, helping them discover completely new insights and deliver new value.”

IBM opens IoT, cloud, big data studio in Shanghai

IBM has opened another studio in Shanghai to target IoT, cloud and big data development

IBM has opened another studio aimed at attracting design and digital experts to work with clients on digital solutions using the company’s mobile, big data and cloud technologies, this time in Shanghai.

Based at IBM’s Yangpu and Zhangjiang offices, the hub will host local IBM Design and Interactive Experience teams as well as digital service designers and developers.

“People’s expectations of enterprise technology have changed because of great design they see in devices and apps they use at work and at play,” said Phil Gilbert, general manager, IBM Design. “Our studios around the world bring design into everything we do and change the way we work to transform how enterprise technology is created, with client experience at the centre.”

The company said the studio will be a space for clients in industries such as healthcare, financial services and retail that are keen to develop new digital services in collaboration with IBM; it has about 20 of these studios located around the world.

Earlier this year the company opened a studio at its Southbank location in London which hosts employees specialising in big data, cloud and mobile products and services – including Bluemix, the company’s platform as a service.

Big Data in Healthcare

Healthcare is the industry where big data is mentioned most often, which makes sense given the incredibly large amount of data the industry obtains and analyzes each day. The sector is trying to become less wasteful and more cost-effective, which has led to many new devices designed to automate data collection for medical professionals.

Writer Trevir Nath recently wrote an article outlining the six main ways healthcare could take advantage of big data and cloud services. These include reducing waste and costs, improving both patient care and pharmaceutical research and development, lessening government subsidies, and improving digital health monitoring. In the article, he mentions how better research and development can lead to more positive patient outcomes, data transparency, and a significant amount of savings, on the scale of billions of dollars.

In the Wall Street Journal earlier this month, the director of health policy at Thomas Jefferson University stated that the use of big data and cloud technology could help create highly customized healthcare based on our individual genomes while at the same time analyzing patterns across a broad demographic. He also said that electronic health records would replace the systems used today, and that near real-time analysis of patient data could help track things such as disease outbreaks and their spread.

Another way this technology comes into play is through wearable technology, such as fitness tracking watches. At the Consumer Electronics Show earlier this year, many of the large technology companies had wearable tech incorporated into their showcases. The data this technology obtains can prove extremely valuable to healthcare professionals. Healthcare providers must acknowledge this and be ready to utilize this information and technology in order to provide better care.

Data-as-a-service provider Delphix buys data-masking specialist Axis

Delphix has acquired data masking and de-identification specialist Axis Technology Software

Data management provider Delphix has acquired Axis Technology Software, a data masking software specialist, for an undisclosed sum.

Delphix offers software that helps users virtualise and deploy large application databases (such as ERP systems) on private and public cloud infrastructure, while Axis offers data masking and de-identification software, particularly for large financial services firms, healthcare providers and insurers.

Delphix said the move will give it a boost in verticals where Axis is already embedded, and help strengthen its core offering. By adding data masking and de-identification capabilities to its data services suite, the company hopes to improve the appeal of its offerings from a security and privacy perspective.

“We believe that data masking—the ability to scramble private information such as national insurance numbers and credit card information—has become a critical requirement for managing data across development, testing, training and reporting environments,” said Jedidiah Yueh, chief executive of Delphix. “With Axis, Delphix not only accelerates application projects, but also increases data security for our customers.”
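As a rough illustration of the kind of masking Yueh describes (not Axis's or Delphix's actual implementation), the sketch below deterministically scrambles the digits of a card-style value with a keyed hash while preserving its format and final four digits, so masked test data stays consistent across environments. The key and function names are invented for the example.

```python
# Minimal, hypothetical masking sketch: a keyed hash drives deterministic digit
# substitution, preserving the value's format and its last four digits.
# Not Axis or Delphix code; the key would be a managed secret in practice.
import hmac, hashlib

SECRET = b"masking-key"

def mask_digits(value: str, keep_last: int = 4) -> str:
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).digest()
    out, d = [], 0
    for i, ch in enumerate(value):
        if ch.isdigit() and i < len(value) - keep_last:
            out.append(str(digest[d % len(digest)] % 10))  # substitute digit
            d += 1
        else:
            out.append(ch)  # keep separators and the trailing digits
    return "".join(out)

print(mask_digits("4111 1111 1111 1234"))  # digits scrambled, last four preserved
```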

Following the acquisition, Michael Logan, founder and chief executive of Axis Technology Software, will join Delphix as vice president of data masking, where he will be responsible for driving development and adoption of the feature set Axis brings to Delphix.

“We’ve built a sophisticated platform to secure customer data at Axis, proven at many of the world’s biggest banks and enterprises,” Logan said.

“The integrated power of our platforms will provide our customers the ability to protect their data where and when they need it.”

Google adds Crate to SQL services on GCE

Google has been on a big data push

Google has added open source distributed SQL data store Crate to the Google Compute Engine arsenal, the latest in a series of moves aimed at bolstering the company’s data services.

Crate is a distributed open source data store built on a high-availability “shared-nothing” architecture that automatically shards and distributes data across all nodes (and maintains several replicas for fault tolerance).

It uses SQL syntax but packs some NoSQL goodies as well (Elasticsearch, Presto and Lucene are among the components it incorporates).

“This means when a new node is added, the cluster automatically rebalances and can self-heal when a node is removed. All data is indexed, optimized, and compressed on ingest and is accessible using familiar SQL syntax through a RESTful API,” explained Tyler Randles, evangelist at Crate.

“Crate was built so developers won’t need to ‘glue’ several technologies together to store documents or BLOBs, or support real-time search. It also helps dev-ops by eliminating the need for manual tuning, sharding, replication, and other operations required to keep a large data store in good health.”
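As a small sketch of what “familiar SQL syntax through a RESTful API” means in practice, the example below posts SQL statements to a Crate node over HTTP. It assumes a single local node on Crate’s default HTTP port (4200) and its _sql endpoint; the table and column names are invented for illustration.

```python
# Minimal sketch of issuing SQL to a Crate node over its REST interface.
# Assumes the default HTTP port (4200) and the _sql endpoint; table and
# column names are illustrative only.
import requests

CRATE_URL = "http://localhost:4200/_sql"

def crate_query(stmt, args=None):
    payload = {"stmt": stmt}
    if args is not None:
        payload["args"] = args  # positional parameters for ? placeholders
    resp = requests.post(CRATE_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Create a table, insert a row, then read it back.
crate_query("CREATE TABLE IF NOT EXISTS visits (id INT, page STRING, ts TIMESTAMP)")
crate_query("INSERT INTO visits (id, page, ts) VALUES (?, ?, ?)",
            [1, "/pricing", "2015-05-01T12:00:00"])
crate_query("REFRESH TABLE visits")  # make the new row visible to queries
result = crate_query("SELECT page, count(*) FROM visits GROUP BY page")
print(result["cols"], result["rows"])
```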

The move is yet another attempt by Google to bolster its data services. Earlier this week the company revealed Bigtable, a fully managed NoSQL database service which it said combines its own internal database technology with open source Apache HBase APIs.

Last month the company announced the beta launch of Google Cloud Dataflow, a Java-based service that lets users build, deploy and run data processing pipelines for applications such as ETL, analytics, real-time computation and process orchestration, while abstracting away infrastructure concerns such as cluster management.