Category archive: MapReduce

Google reveals Bigtable, a NoSQL service based on what it uses internally

Google has punted another big data service, a variant of what it uses internally, into the wild

Search giant Google announced Bigtable, a fully managed NoSQL database service that the company said combines its own internal database technology with the open source Apache HBase API.

The company that helped give birth to MapReduce, and indirectly to its open source counterpart Hadoop, is now making available the same non-relational database technology that drives a number of its services, including Google Search, Gmail, and Google Analytics.

Google said Cloud Bigtable is powered underneath by the same Bigtable technology the company runs internally, and is accessed through the HBase API (which provides real-time read / write access capabilities).
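
Because access is through the standard HBase API, existing HBase client code should largely carry over. As a minimal, illustrative sketch (not taken from Google's announcement), the Java snippet below writes and reads a cell using the stock Apache HBase 1.x client; the table name and column family are hypothetical, and connecting to Cloud Bigtable itself would normally go through Google's bigtable-hbase adapter rather than a plain HBase configuration.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseApiSketch {
  public static void main(String[] args) throws Exception {
    // Plain HBase client configuration; a Cloud Bigtable deployment would
    // typically swap this for Google's bigtable-hbase connection factory.
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("events"))) {

      // Write a single cell (column family "cf", qualifier "payload").
      Put put = new Put(Bytes.toBytes("row-001"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"), Bytes.toBytes("hello"));
      table.put(put);

      // Read it back.
      Result result = table.get(new Get(Bytes.toBytes("row-001")));
      System.out.println(Bytes.toString(
          result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("payload"))));
    }
  }
}
```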

“Google Cloud Bigtable excels at large ingestion, analytics, and data-heavy serving workloads. It’s ideal for enterprises and data-driven organizations that need to handle huge volumes of data, including businesses in the financial services, AdTech, energy, biomedical, and telecommunications industries,” explained Cory O’Connor, product manager at Google.

O’Connor said the service, which is now in beta, can deliver more than twice the performance of its direct competition (though this will likely depend on the use case), with a total cost of ownership (TCO) of less than half that of its competitors.

“As businesses become increasingly data-centric, and with the coming age of the Internet of Things, enterprises and data-driven organizations must become adept at efficiently deriving insights from their data. In this environment, any time spent building and managing infrastructure rather than working on applications is a lost opportunity.”

Bigtable is Google’s latest move to bolster its data services, a central pillar of its strategy to attract new customers to its growing cloud platform. Last month the company announced the beta launch of Google Cloud Dataflow, a Java-based service that lets users build, deploy and run data processing pipelines for applications such as ETL, analytics, real-time computation, and process orchestration, while abstracting away infrastructure concerns like cluster management.
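
For a sense of what such a pipeline looks like in code, here is a minimal word-count sketch written against the Apache Beam Java SDK, the open source successor to the original Cloud Dataflow SDK; the bucket paths are hypothetical, and the same program can be pointed at the Dataflow runner (or run locally) via the pipeline options passed on the command line.

```java
import java.util.Arrays;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class WordCountPipeline {
  public static void main(String[] args) {
    // Runner, project, and staging options come from the command line,
    // e.g. --runner=DataflowRunner for Google Cloud Dataflow.
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply("ReadLines", TextIO.read().from("gs://my-bucket/input/*.txt"))
     .apply("SplitWords", FlatMapElements.into(TypeDescriptors.strings())
         .via((String line) -> Arrays.asList(line.split("\\s+"))))
     .apply("CountWords", Count.perElement())
     .apply("Format", MapElements.into(TypeDescriptors.strings())
         .via((KV<String, Long> kv) -> kv.getKey() + "," + kv.getValue()))
     .apply("WriteCounts", TextIO.write().to("gs://my-bucket/output/counts"));

    p.run().waitUntilFinish();
  }
}
```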

Google’s Dremel is the Holy Grail of Big Data: Really Big, Really Fast, Really Simple

First Google created, and wrote papers on, MapReduce and the Google File System, which got reverse-engineered into Hadoop, the current state of the art for Big Data.

But Google has moved on to Dremel, and the rest of the world is slow in catching up.

With BigQuery, Google offers a simple-to-use service that doesn’t sacrifice Big Data scale or speed.
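
BigQuery exposes Dremel-style interactive analysis as ordinary SQL over very large tables. As a rough illustration (not drawn from the article), the Java sketch below runs an aggregate query against one of Google's public sample datasets using the google-cloud-bigquery client library; it assumes application-default credentials are already configured.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.TableResult;

public class BigQuerySketch {
  public static void main(String[] args) throws Exception {
    // Client is built from application-default credentials.
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

    // Interactive SQL over a public sample table; no cluster to provision or manage.
    QueryJobConfiguration query = QueryJobConfiguration.newBuilder(
            "SELECT word, SUM(word_count) AS total "
          + "FROM `bigquery-public-data.samples.shakespeare` "
          + "GROUP BY word ORDER BY total DESC LIMIT 10")
        .setUseLegacySql(false)
        .build();

    TableResult result = bigquery.query(query);
    for (FieldValueList row : result.iterateAll()) {
      System.out.println(row.get("word").getStringValue() + "\t"
          + row.get("total").getLongValue());
    }
  }
}
```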

As Armando Fox, a professor of computer science at the University of California, Berkeley, who specializes in these sorts of data-center-sized software platforms, put it in a Wired article:

“This is unprecedented,” Fox said. Hadoop is the centerpiece of the “Big Data” movement, a widespread effort to build tools that can analyze extremely large amounts of information. But with today’s Big Data tools there’s often a drawback: you can’t quite analyze the data with the speed and precision you expect from traditional data analysis or “business intelligence” tools. With Dremel, Fox says, you can.

“They managed to combine large-scale analytics with the ability to really drill down into the data, and they’ve done it in a way that I wouldn’t have thought was possible,” he says. “The size of the data and the speed with which you can comfortably explore the data is really impressive. People have done Big Data systems before, but before Dremel, no one had really done a system that was that big and that fast.

“Usually, you have to do one or the other. The more you do one, the more you have to give up on the other. But with Dremel, they did both.”


NextBio, Intel Collaborate to Optimize Hadoop for Genomics Big Data

NextBio and Intel announced today a collaboration aimed at optimizing and stabilizing the Hadoop stack and advancing the use of Big Data technologies in genomics. As a part of this collaboration, the NextBio and Intel engineering teams will apply experience they have gained from NextBio’s use of Big Data technologies to the improvement of HDFS, Hadoop, and HBase. Any enhancements that NextBio engineers make to the Hadoop stack will be contributed to the open-source community. Intel will also showcase NextBio’s use of Big Data.

“NextBio is positioned at the intersection of Genomics and Big Data. Every day we deal with the three V’s (volume, variety, and velocity) associated with Big Data – We, our collaborators, and our users are adding large volumes of a variety of molecular data to NextBio at an increasing velocity,” said Dr. Satnam Alag, chief technology officer and vice president of engineering at NextBio. “Without the implementation of our algorithms in the MapReduce framework, operational expertise in HDFS, Hadoop, and HBase, and investments in building our secure cloud-based infrastructure, it would have been impossible for us to scale cost-effectively to handle this large-scale data.”

“Intel is firmly committed to the wide adoption and use of Big Data technologies such as HDFS, Hadoop, and HBase across all industries that need to analyze large amounts of data,” said Girish Juneja, CTO and General Manager, Big Data Software and Services, Intel. “Complex data requiring compute-intensive analysis needs not only Big Data open source, but a combination of hardware and software management optimizations to help deliver needed scale with a high return on investment. Intel is working closely with NextBio to deliver this showcase reference to the Big Data community and life science industry.”

“The use of Big Data technologies at NextBio enables researchers and clinicians to mine billions of data points in real-time to discover new biomarkers, clinically assess targets and drug profiles, optimally design clinical trials, and interpret patient molecular data,” Dr. Alag continued. “NextBio has invested significantly in the use of Big Data technologies to handle the tsunami of genomic data being generated and its expected exponential growth. As we further scale our infrastructure to handle this growing data resource, we are excited to work with Intel to make the Hadoop stack better and give back to the open-source community.”
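
As a concrete, purely hypothetical illustration of the kind of MapReduce implementation Dr. Alag describes, the sketch below is a classic Hadoop job (new MapReduce API) that counts records per gene symbol in a tab-delimited variant file; the input layout and column positions are assumptions made for the example, not NextBio's actual pipeline.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class VariantCountJob {

  // Emits (gene, 1) for each record; assumes the gene symbol is the third
  // tab-separated column, which is purely illustrative.
  public static class VariantMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);
    private final Text gene = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\t");
      if (fields.length > 2) {
        gene.set(fields[2]);
        context.write(gene, ONE);
      }
    }
  }

  // Sums the counts for each gene symbol.
  public static class SumReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
        throws IOException, InterruptedException {
      long sum = 0;
      for (LongWritable v : values) {
        sum += v.get();
      }
      context.write(key, new LongWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "variant-count");
    job.setJarByClass(VariantCountJob.class);
    job.setMapperClass(VariantMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(LongWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```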


Qubole Exits Stealth Mode, Introduces Auto-Scaling Big Data Platform

Qubole exited stealth mode today to introduce its auto-scaling Big Data platform, “combining the power of Apache Hadoop and Hive with the simplicity of a Cloud platform in order to accelerate time-to-value from Big Data.” Qubole, a Silver Sponsor of next week’s Hadoop Summit 2012 conference, also invites business analysts, data scientists, and data engineers to participate in the Qubole early access program.

While best known as the creators of Apache Hive and long-time contributors to Apache Hadoop, Qubole’s founders Ashish Thusoo and Joydeep Sen Sarma also managed the Facebook data infrastructure team that was responsible for nearly 25PB of compressed data. The data services built by this team are used across business and engineering teams that submit tens of thousands of jobs, queries and ad hoc analysis requests every day. Thusoo and Sen Sarma applied that experience to create what Qubole calls the industry’s next-generation Big Data platform for the cloud. With Qubole, organizations can begin uncovering new insights from their structured and unstructured data sources within minutes.

“We believe a new approach is needed – one that hides the complexity associated with storing and managing data and instead provides a fast, easy path to analysis and insights for business analysts, data scientists and data engineers,” said Joydeep Sen Sarma, Co-Founder of Qubole. “We gained significant experience helping a web-scale company build and manage a complex Big Data platform. We don’t want our customers to worry about choosing a flavor of Hadoop, or spinning up clusters, or trying to optimize performance. Qubole will manage all of that so that users can focus on their data and their algorithms.”

Benefits of the Qubole Auto-Scaling Big Data Platform for the Cloud include:

  • Fastest Path to Big Data Analytics – Qubole handles all infrastructure complexities behind the scenes so users can begin doing ad hoc analysis and creating data pipelines using SQL and MapReduce within minutes (a generic Hive sketch follows this list).
  • Scalability “On the Fly” – Qubole features the industry’s first auto-scaling Hadoop clusters so users can get the right amount of computing power for each and every project.
  • Fast Query Authoring Tools – Qubole provides fast access to sample data so that queries can be authored and validated quickly.
  • Fastest Hadoop and Hive Service in the Cloud – Using advanced caching and query acceleration techniques, Qubole has demonstrated query speeds up to five times faster than other Cloud-based Hadoop solutions.
  • Quick Connection to Data – Qubole provides mechanisms to work with data sets stored in any format in Amazon S3. It also allows users to easily export data to S3 or to databases like MySQL.
  • Integrated Data Workflow Engine – Qubole provides mechanisms to easily create data pipelines so users can run their queries periodically with a high degree of reliability.
  • Enhanced Debugging Abilities – Qubole provides features that help users get to errors in Hadoop/Hive jobs fast, saving time in debugging queries.
  • Easy Collaboration with Peers – Qubole’s Cloud-based architecture makes it ideal for analysts working in a geographically distributed environment to share information and analysis.
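
As a rough illustration of the "SQL and MapReduce within minutes" workflow referenced in the first bullet, the sketch below points a Hive external table at data already sitting in Amazon S3 and runs an ad hoc aggregation over JDBC; the HiveServer2 endpoint, table, and bucket are hypothetical, and Qubole's own connectors and APIs are not assumed here.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AdHocHiveQuery {
  public static void main(String[] args) throws Exception {
    // Generic HiveServer2 JDBC endpoint; host and credentials are placeholders.
    String url = "jdbc:hive2://hive.example.com:10000/default";
    try (Connection conn = DriverManager.getConnection(url, "analyst", "");
         Statement stmt = conn.createStatement()) {

      // Point an external table at data already stored in S3 (no load step needed).
      stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS clicks ("
          + "user_id STRING, url STRING, ts BIGINT) "
          + "ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t' "
          + "LOCATION 's3://my-bucket/clickstream/'");

      // Ad hoc aggregation; Hive compiles this into MapReduce jobs on the cluster.
      try (ResultSet rs = stmt.executeQuery(
          "SELECT url, COUNT(*) AS hits FROM clicks "
          + "GROUP BY url ORDER BY hits DESC LIMIT 10")) {
        while (rs.next()) {
          System.out.println(rs.getString("url") + "\t" + rs.getLong("hits"));
        }
      }
    }
  }
}
```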

“Companies are increasingly moving to the Cloud and for good reason. Applications hosted in the Cloud are much easier to use and manage, especially for companies without very large IT organizations. While Software as a Service (SaaS) is now the standard for many different types of applications, it has not yet been made easy for companies to use the Cloud to convert their ever-increasing volume and variety of data into useful business and product insights. Qubole makes it much easier and faster for companies to analyze and process more of their Big Data, and they will benefit tremendously,” said Ashish Thusoo, Co-Founder of Qubole.

To join the early access program, please visit www.qubole.com. Qubole is looking to add a select number of companies for early access to its service, with the intention of making the service more generally available in Q4 2012. People interested in seeing a demo of the platform can visit Qubole at the Hadoop Summit June 13 – 14 at the San Jose Convention Center, kiosk #B11.


Actuate, Hortonworks Collaborate to Visualize Big Data

Actuate Corporation, the company behind BIRT and an open source Business Intelligence (BI) vendor, today announced a collaboration between Actuate BIRT and the Hortonworks Data Platform to enable Big Data visualization. The Hortonworks Data Platform is a completely open source, tightly integrated and tested distribution of Apache Hadoop, backed by extensive customer support and training.

The ActuateOne integrated product suite, built around BIRT, uses native Hive query access to leverage MapReduce functions to extract data from Hadoop, pulling those data sets into customizable BIRT-based dashboards and scorecards for interactive visualization and analysis.

“We have dedicated significant resources to make Apache Hadoop more robust and easier to integrate, extend, deploy and use,” said John Kreisa, VP of Marketing at Hortonworks. “Our partnership with open source BI leader Actuate enables more users to cost effectively analyze vast amounts of data stored in Hadoop using open source technologies.”

“Actuate’s collaboration with Hortonworks will ease the transition from Big Data hype to Big Data usefulness,” said Nobby Akiha, Senior Vice President of Marketing at Actuate. “We believe the key to success with Big Data lies in building the right infrastructure to manage it. Teaming with Hortonworks will further our goal of helping organizations figure out how best to leverage and integrate Big Data sources to enable better decision making.”

Large organizations are increasingly turning to Apache Hadoop for the storage and management of massive amounts of data and thus need scalable ways to explore, analyze and visualize the insights stored within it. The combination of the Hortonworks Data Platform’s distributed processing of Hadoop data sources of any size, with Actuate’s scalable infrastructure and intuitive data visualization capabilities, enables organizations to more effectively operationalize Big Data for thousands of customers, partners and employees.