As I’ve recently written, people are throwing around the term Big Data as if it were duct tape: you can use it here, you can use it there, you can use it anywhere.
This promiscuous use of the term has led to some amusing “findings” in today’s headlines: for example, “60% of companies expect to invest in Big Data dramatically within three years,” while at the same time “85% of Fortune 500 companies unable to leverage Big Data for competitive advantage.”
The latter quote is from Gartner, so it is easily discounted. Besides, there should be an automatic penalty of having to watch The View on a continuous loop for a year whenever one utters the phrase “leverage for competitive advantage.” The first of the two quotes is one of those wildly general, unfocused predictions that mean nothing, and the use of the word “expect” makes it even less convincing.
So how do we restore some order here? Perhaps a little checklist can help.
Are you considering Big Data for a project, an initiative, or a makeover?
Are you with a Fortune 500 or similarly large company?
Is your data new and unstructured? How much do you know about Hadoop? (See the sketch after this checklist.)
Is your data traditional and structured? How much of it do you have?
Are you allowed to go outside the company for public-cloud instances? (See the New York Times and its creation of 11 million PDFs during one large public-cloud session.)
How many of your current vendors are touting cloud computing? How many are touting Big Data? How many of them did so at your prompting, rather than theirs?
What’s your timeframe?
What’s your budget for using new resources, even if they’re rented cloud resources?
Why are you considering Big Data in the first place? Have you seen an answer to a longstanding problem, or are you just reading too many articles about it?
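If the Hadoop question on that checklist gives you pause, it helps to know how little ceremony the underlying programming model actually requires. Below is a minimal sketch of a word-count job in the Hadoop Streaming style, written in Python; the file names (mapper.py, reducer.py) and the input are purely illustrative, and the whole pipeline can be tried on a laptop with an ordinary shell before any cluster enters the picture.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming-style mapper:
# emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- Hadoop Streaming-style reducer:
# input arrives sorted by key, so a simple running total
# per word is enough to produce the counts.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

To kick the tires without a cluster, pipe any text file through `mapper.py`, then `sort`, then `reducer.py`; the same two scripts can later be handed to Hadoop Streaming unchanged. The point is that a gigabyte of your own data is more than enough to find out whether this model fits your problem.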
I think it’s fair to say Big Data can run anywhere from a gigabyte on up, depending on the size of your organization, the problem you face, and how well you’ve been able to solve that problem so far. This statement may sound horrifying, as Big Data has until recently been the province of people who work with hundreds of terabytes and dozens of petabytes.
Facebook claims 100 petabytes (i.e., 100 million gigabytes) in its Hadoop-driven repository; the mind boggles at the uselessness of that information for anything other than hawking targeted potpourri. You will likely have much less data, but perhaps at least as serious a use for it.
In any case, I never trust predictions of “60% of this” or “85% of that” or whatnot. We are all as unique as the challenges we face. We can perhaps validate our concerns by reading about other people’s problems and solutions, but we can only learn how best to work with this stuff by doing it ourselves, whether we’re part of the 60 percent, the 85 percent, the 99 percent, or the 1 percent.