Why Can Unlabeled Data Be Useful?
Weakly Supervised Training: Trading Labels for Computation
It is time to sum up the series. Faced with a shortage of labeled data in many applications, machine learning researchers have been looking for ways to solve problems without large labeled datasets. We have discussed three basic ideas that have already blossomed into full-scale research directions:
one-shot and zero-shot learning, which let models quickly extend to new classes from very few labeled examples, or none at all;
reinforcement learning, which often needs no data at all beyond an environment that defines the “rules of the game”;
weakly supervised learning, which augments supervised models with large amounts of unlabeled data.
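To make the last idea concrete, here is a minimal sketch of one common weakly supervised technique, self-training with pseudo-labels: a model trained on a handful of labeled points labels the unlabeled points it is confident about, then retrains on the enlarged set. The toy nearest-centroid classifier, the synthetic two-blob data, and the confidence threshold are all illustrative choices, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two Gaussian blobs, 200 points each,
# but only 5 labeled points per class.
n = 200
X = np.vstack([rng.normal(-2, 1, (n, 2)), rng.normal(2, 1, (n, 2))])
y = np.array([0] * n + [1] * n)
labeled = np.concatenate([rng.choice(n, 5, replace=False),
                          n + rng.choice(n, 5, replace=False)])
unlabeled = np.setdiff1d(np.arange(2 * n), labeled)

def fit_centroids(X, y):
    # "Training" is just computing per-class mean vectors.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each point to the nearest class centroid.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1), d

# Self-training loop: pseudo-label confident unlabeled points, refit.
X_lab, y_lab = X[labeled], y[labeled]
for _ in range(5):
    centroids = fit_centroids(X_lab, y_lab)
    pred, dist = predict(centroids, X[unlabeled])
    margin = np.abs(dist[:, 0] - dist[:, 1])
    confident = margin > 1.0  # hypothetical confidence threshold
    X_lab = np.vstack([X[labeled], X[unlabeled][confident]])
    y_lab = np.concatenate([y[labeled], pred[confident]])

acc = (predict(fit_centroids(X_lab, y_lab), X)[0] == y).mean()
```

The unlabeled points never reveal their true labels to the model; they help only by filling in the shape of the two clusters, which is exactly the sense in which unlabeled data "informs" a supervised learner.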