HPE has unveiled new capabilities and partnerships to bring real-time data analytics and IoT insight to the network edge, reports Telecoms.com.
The team claims its new offerings, Edgeline EL1000 and Edgeline EL4000, are the first converged systems for the Internet of Things, capable of integrating data capture, analysis and storage at the source of collection. Transporting and storing data for analytics is becoming prohibitively expensive, the company claims, so the new products offer decision-making insight at the network edge to reduce costs and complexity.
HPE claims the new offerings are capable of delivering heavy-duty data analytics and insights, graphically intense data visualization, and real-time response at the edge. Until recently, the technology to drive edge analytics had not been available, meaning data had to be transferred to the network core to acquire insight. The team have also announced the launch of the Vertica Analytics Platform, which offers in-database machine learning algorithms and closed-loop analytics at the network edge.
“Organizations that take advantage of the vast amount of data and run deep analytics at the edge can become digital disrupters within their industries,” said Mark Potter, CTO of the Enterprise Group at HPE. “HPE has built machine learning and real time analytics into its IoT platforms, and provides services that help customers understand how data can best be leveraged, enabling them to optimize maintenance management, improve operations efficiency and ultimately, drive significant cost savings.”
The news follows an announcement from IBM and Cisco last week which also focused on IoT at the edge. Alongside the product launches from HPE, the team also announced a partnership with GE Digital to create more relevant propositions for industry. The partnership focuses on combining HPE technical know-how with GE’s industrial expertise and its Predix platform to create IoT-optimized hardware and software. GE’s Predix platform will be a preferred software solution for HPE’s industrial-related use cases and customers.
While the promise of IoT has given the industry plenty to get excited about in recent years, its full potential has been difficult to realize due to the vast amount of data which needs to be transported to the network core before it can be processed and turned into insight. Although it would seem logical to process the data at the source of collection, technical capabilities have not been at the point where this has been possible. Recent advances from the IBM/Cisco and HPE/GE partnerships remove the need to transfer that information, and with it the risk of bottlenecks, points of failure and storage expenses in the IoT process.
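To make the idea concrete, here is a minimal sketch in plain Python of edge-side aggregation, the general pattern these products are aimed at; the sensor readings, window size and uplink function are hypothetical, and nothing here is an HPE or Edgeline API.

```python
# Minimal sketch of edge-side aggregation: raw readings are summarised
# locally and only the summary is forwarded to the network core.
# All names and values are illustrative, not an HPE/Edgeline API.
from statistics import mean

def summarise_window(readings):
    """Collapse a window of raw sensor readings into one compact record."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def forward_to_core(summary):
    # Stand-in for whatever uplink the remote site actually has.
    print("sending to core:", summary)

if __name__ == "__main__":
    raw = [20.1, 20.4, 19.8, 35.2, 20.0, 20.3]   # e.g. one minute of samples
    forward_to_core(summarise_window(raw))        # 6 readings -> 1 record
```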
“In order to fully take advantage of the Industrial IoT, customers need data-centre-grade computing power, both at the edge – where the action is – and in the cloud,” said Potter. “With our advanced technologies, customers are able to access data centre-level compute at every point in the Industrial IoT, delivering insight and control when and where needed.”
Applications for the edge-analytics proposition could be quite wide, ranging from production lines in Eastern Europe to oil rigs in the North Sea to smart energy grids in Copenhagen. It would appear the team are not only targeting industrial segments, where IoT could ensure faster and more accurate decision making in the manufacturing process for instance, but also those assets which do not have reliable or consistent connectivity.
Big data as a concept has in fact been around longer than computer technology, which would surprise a number of people.
Back in 1944, Wesleyan University Librarian Fremont Rider wrote a paper estimating that American university libraries were doubling in size every sixteen years, meaning the Yale Library in 2040 would occupy over 6,000 miles of shelves. This is not big data as most people would know it, but the vast and rapid increase in the quantity and variety of information in the Yale library follows the same principle.
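The arithmetic behind that projection is worth spelling out; the short calculation below uses only the figures quoted above (doubling every sixteen years, roughly 6,000 shelf miles by 2040) and back-computes the implied 1944 starting point, which the article does not state.

```python
# Back-of-envelope check of Rider's projection using only the figures
# quoted above: shelves double every 16 years and reach ~6,000 miles by 2040.
doubling_period = 16            # years
years = 2040 - 1944             # 96 years
doublings = years / doubling_period
growth_factor = 2 ** doublings  # 2**6 = 64

shelf_miles_2040 = 6000
implied_1944_miles = shelf_miles_2040 / growth_factor
print(f"{doublings:.0f} doublings, {growth_factor:.0f}x growth")
print(f"implied 1944 collection: ~{implied_1944_miles:.0f} miles of shelves")
```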
The concept was not known as big data back then, but technologists today face a similar challenge in how to handle such a vast amount of information. Not necessarily how to store it, but how to make use of it. The promise of big data, and data analytics more generally, is to provide intelligence, insight and predictability, but only now are we getting to a stage where technology is advanced enough to capitalise on the vast amount of information we have available to us.
Back in 2003 and 2004 Google published papers on its Google File System and MapReduce, which are generally credited as the starting point for the Apache Hadoop platform. At that point, few people could have anticipated the explosion of technology we’ve since witnessed. Cloudera Chairman and CSO Mike Olson is one of those few, and he now leads a company regularly regarded as one of the go-to organizations for the Apache Hadoop platform.
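For readers who have never seen the programming model those papers described, the following is a toy, single-process imitation of the map, shuffle and reduce phases using the classic word-count example; real Hadoop jobs run these phases in parallel across a cluster, and none of this code is taken from the papers themselves.

```python
# Toy, single-process imitation of the MapReduce programming model:
# a map phase emits (key, value) pairs, a shuffle groups them by key,
# and a reduce phase aggregates each group. Real Hadoop/MapReduce runs
# these phases in parallel across many machines.
from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1          # emit (word, 1) for every occurrence

def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)     # group emitted values by key
    return grouped

def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

if __name__ == "__main__":
    docs = ["big data at the edge", "big data in the core"]
    pairs = (pair for doc in docs for pair in map_phase(doc))
    print(reduce_phase(shuffle(pairs)))   # e.g. {'big': 2, 'data': 2, ...}
```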
“We’re seeing innovation in CPUs, in optical networking all the way to the chip, in solid state, highly affordable, high performance memory systems, we’re seeing dramatic changes in storage capabilities generally. Those changes are going to force us to adapt the software and change the way it operates,” said Olson, speaking at the Strata + Hadoop event in London. “Apache Hadoop has come a long way in 10 years; the road in front of it is exciting but is going to require an awful lot of work.”
Analytics was previously seen as an opportunity for companies to look back at their performance over a defined period and develop lessons for employees on how future performance could be improved. Today, advanced analytics is applied to improving performance in real time. A company can react instantly to shift the focus of a marketing campaign, or alter a production line to improve the outcome. The promise of big data and IoT is predictability and data-defined decision making, which can shift a business from a reactive position to a predictive one. Understanding trends can create proactive business models which advise decision makers on how to steer a company. But what comes next?
Cloudera Chairman and CSO Mike Olson
For Olson, machine learning and artificial intelligence are where the industry is heading. We’re at a stage where big data and analytics can be used to automate processes and replace humans for simple tasks. In a short period of time, we’ve seen some significant advances in applications of the technology, most notably Google’s AlphaGo beating world Go champion Lee Se-dol and Facebook’s use of AI in picture recognition.
Although computers taking on humans in games of strategy is nothing new as a PR stunt (IBM’s Deep Blue defeated chess world champion Garry Kasparov in 1997), this is a very different proposition. While chess is a game which relies on strategy, Go is another beast. Due to the vast number of permutations available, strategies within the game rely on intuition and feel, a far more complex challenge for the Google team. The fact AlphaGo won the match demonstrates how far researchers have progressed in making machine learning and artificial intelligence a reality.
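A crude, back-of-envelope calculation (our own illustration, not a figure from the article) shows why brute-force search is hopeless in Go: the board has 19 x 19 intersections, each of which can be empty, black or white, so even a naive upper bound on board configurations is astronomically large.

```python
# Naive upper bound on Go board configurations: 361 intersections, each
# empty, black or white. Most of these are not legal positions, but the
# bound already illustrates why exhaustive search is out of the question.
import math

points = 19 * 19                    # 361 intersections on a Go board
configurations = 3 ** points        # empty / black / white per point
print(f"3^{points} is roughly 10^{math.log10(configurations):.0f}")  # ~10^172
```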
“In narrow but very interesting domains, computers have become better than humans at vision and we’re going to see that piece of innovation absolutely continue,” said Olson. “Big Data is going to drive innovation here.”
This may be difficult for a number of people to comprehend, but big data has entered the business world; true AI and automated, data-driven decision making may not be too far behind. Data is driving the direction of businesses, whether through a better understanding of the customer, tighter organizational security or a clearer view of the risk associated with any business decision. Big data is no longer a theory, but an established business strategy.
Olson is not saying computers will replace humans, but the number and variety of processes which can be handed over to machines is certainly growing, and growing faster every day.
IBM has expanded its portfolio of software-defined infrastructure solutions, adding cognitive features to speed up data analysis, integrate Apache Spark and help accelerate research and design, the company claims.
The new offering will be called IBM Spectrum Computing and is designed to aid companies in extracting full value from their data by adding scheduling capabilities to the infrastructure layer. The product offers workload and resource management features to research scientists for high-performance research, design and simulation applications. The new proposition focuses on three areas.
Firstly, Spectrum Computing works with cloud applications and open source frameworks to assist in sharing resources between the programmes to speed up analysis. Secondly, the company believes it makes the adoption of Apache Spark simpler. And finally, the ability to share resources will accelerate research and design by up to 150 times, IBM claims.
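For context on what adopting Apache Spark actually looks like to a data team, here is a minimal, generic PySpark job (our own example, assuming a local pyspark installation; it is not IBM Spectrum Computing code). A scheduling layer of the kind IBM describes sits beneath jobs like this and decides where their tasks run on shared infrastructure.

```python
# Minimal, generic PySpark job, shown only to illustrate the kind of
# workload a cluster-level scheduler would place on shared resources.
# Requires a local pyspark installation; not IBM Spectrum Computing code.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-averages").getOrCreate()

# Hypothetical readings from two remote sites
readings = spark.createDataFrame(
    [("rig-1", 20.1), ("rig-1", 35.2), ("rig-2", 19.8), ("rig-2", 20.4)],
    ["site", "temperature"],
)

summary = readings.groupBy("site").agg(F.avg("temperature").alias("avg_temp"))
summary.show()

spark.stop()
```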
By incorporating the cognitive computing capabilities into the software-defined infrastructure products, IBM believes the concept on the whole will become more ‘intelligent’. The scheduling competencies of the software will increase compute resource utilization and predictability across multiple workloads.
The software-defined data centre has been steadily growing, and is forecast to continue its healthy growth over the coming years. Research has highlighted the market could be worth in the region of $77.18 billion by 2020, growing at a CAGR of 28.8% from 2015 to 2020. Growth is primarily driven by the appeal of simplified scalability as well as interoperability. North America and Asia are expected to hold the biggest market share worldwide, though Europe as a region is expected to grow at a faster rate.
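For a sense of what that forecast implies, the quick calculation below works backwards from the two figures quoted ($77.18 billion by 2020, 28.8% CAGR from 2015) to the implied 2015 market size of roughly $22 billion.

```python
# Implied 2015 market size from the forecast quoted above:
# $77.18bn by 2020, growing at a 28.8% compound annual growth rate.
value_2020 = 77.18            # USD billions
cagr = 0.288
years = 2020 - 2015           # five years of compounding

value_2015 = value_2020 / (1 + cagr) ** years
print(f"implied 2015 market size: ~${value_2015:.1f}bn")  # ~$21.8bn
```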
“Data is being generated at tremendous rates unlike ever before, and its explosive growth is outstripping human capacity to understand it, and mine it for business insights,” said Bernard Spang, VP for IBM Software Defined Infrastructure. “At the core of the cognitive infrastructure is the need for high performance analytics of both structured and unstructured data. IBM Spectrum Computing is helping organizations more rapidly adopt new technologies and achieve greater, more predictable performance.”
Wipro has announced it has open sourced its big data solution Big Data Ready Enterprise (BDRE), partnering with California-based Hortonworks to push the initiative forward.
The company claims the BDRE offering addresses the complete lifecycle of managing data across enterprise data lakes, allowing customers to ingest, organize, enrich, process, analyse, govern and extract data at a faster pace. BDRE is released under the Apache License 2.0 and hosted on GitHub. Teaming up with Hortonworks will also give the company additional clout, as Hortonworks is generally considered one of the top three Hadoop distribution vendors in the market.
“Wipro takes pride in being a significant contributor to the open source community, and the release of BDRE reinforces our commitment towards this ecosystem,” said Bhanumurthy BM, COO at Wipro. “BDRE will not only make big data technology adoption simpler and effective, it will also open opportunities across industry verticals that organizations can successfully leverage. Being at the forefront of innovation in big data, we are able to guide organizations that seek to benefit from the strategic, financial, organizational and technological benefits of adopting open source technologies.”
Companies open sourcing their own technologies has become something of a trend in recent months, as product owners appear to be moving towards a service model as opposed to a traditional vendor model. According to ‘The Open Source Era’, an Oxford Economics study commissioned by Wipro, 64% of respondents believe that open source will drive Big Data efforts in the next three years.
The report also claims open source has become a cornerstone of many businesses’ technology roadmaps; 75% of respondents believe integration between legacy and open source systems is one of the main challenges, while 52% said open source is already supporting the development of new products and services.
IBM and Cisco have extended a long-standing partnership to enable real-time IoT analytics and insight at the point of data collection.
The partnership will focus on combining the cognitive computing capabilities of IBM’s Watson with Cisco’s analytics competencies to support data action and insight at the point of collection. The team are targeting companies who operate in remote environments or on the network edge, for example oil rigs, where time is of the essence but access to the network can be limited or unreliable.
The long-held promise of IoT has been to increase the amount of data organizations can collect, which, once analysed, can be used to gain a greater understanding of a customer, environment or asset. Cloud computing offers organizations an opportunity to realize the potential of real-time insight, but for those with remote assets where access to high-bandwidth connectivity is not a given, the promise has always been out of reach.
“The way we experience and interact with the physical world is being transformed by the power of cloud computing and the Internet of Things,” said Harriet Green, GM for IBM Watson IoT Commerce & Education. “For an oil rig in a remote location or a factory where critical decisions have to be taken immediately, uploading all data to the cloud is not always the best option.
“By coming together, IBM and Cisco are taking these powerful IoT technologies the last mile, extending Watson IoT from the cloud to the edge of computer networks, helping to make these strong analytics capabilities available virtually everywhere, always.”
IoT insight at the point of collection has been an area of interest to enterprises for a number of reasons. Firstly, by decreasing the quantity of data which has to be moved, transmission costs and latency are reduced and the quality of service is improved. Secondly, the bottleneck of traffic at the network core can potentially be removed, reducing the likelihood of failure. And finally, the ability to virtualize at the network edge can extend the scalability of an organization.
ABI Research has estimated that 90% of the data collected through IoT-connected devices is stored or processed locally, where it is inaccessible for real-time analytics and must be transferred to another location for analysis. As the number of these devices increases, the quantity of data which must be transferred, stored and analysed also increases. The cost of data transmission and storage could soon prohibit some organizations from achieving the goal of IoT. The partners are hoping the combination of Cisco’s edge analytics capabilities and the Watson cognitive solutions will enable real-time analysis on the spot, removing a number of the challenges faced.
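As an entirely generic illustration of the first two points, the sketch below filters a stream at the edge and forwards only the events that need central attention; the threshold and readings are invented for the example, and this is not Cisco or Watson code.

```python
# Generic edge-filtering sketch: only readings that need central attention
# are forwarded, which is what reduces transmission volume and relieves
# the bottleneck at the network core. Threshold and readings are invented.
THRESHOLD = 80.0   # hypothetical alert limit for a remote asset

def filter_at_edge(stream):
    """Yield only the (timestamp, value) readings above the threshold."""
    for timestamp, value in stream:
        if value > THRESHOLD:
            yield timestamp, value

if __name__ == "__main__":
    stream = [(1, 72.4), (2, 75.1), (3, 91.3), (4, 74.9)]
    alerts = list(filter_at_edge(stream))
    print(f"forwarded {len(alerts)} of {len(stream)} readings:", alerts)
```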
“Together, Cisco and IBM are positioned to help organizations make real-time informed decisions based on business-critical data that was often previously undetected and overlooked,” said Mala Anand, SVP of the Cisco Data & Analytics Platforms Group. “With the vast amount of data being created at the edge of the network, using existing Cisco infrastructure to perform streaming analytics is the perfect way to cost-effectively obtain real-time insights. Our powerful technology provides customers with the flexibility to combine this edge processing with the cognitive computing power of the IBM Watson IoT Platform.”