So, you bought into the current machine learning craze and collected millions (or even billions) of records from a promising new data source. Now what do you do with them? Too often, an abundance of data quickly turns into an abundance of problems. How do you extract that “magic essence” from your data without falling into the common pitfalls?
In her session at @ThingsExpo, Natalia Ponomareva, Software Engineer at Google, provided tips on how to be successful at large-scale machine learning. She briefly reviewed the frameworks available for training machine learning models on large amounts of data, touched on feature engineering and algorithm selection, and offered a few tips to help you avoid the most common mistakes.