Neural networks have been highly influential in the machine learning community over the past decades, thanks to the rise of computing power, the abundance of unstructured data, and the advances in algorithms. However, researchers are still a long way from deploying neural networks reliably in real-world settings where data is scarce and requirements for model accuracy and speed are critical.
Meta-learning, also known as *learning how to learn*, has recently emerged as a learning paradigm that can absorb information from one task and generalize it proficiently to unseen tasks. During this quarantine time, I started watching lectures from Stanford's CS 330 class on Deep Multi-Task and Meta-Learning, taught by the brilliant Chelsea Finn. Drawing on her lectures, this blog post attempts to answer these key questions:
- Why do we need meta-learning?
- How does the math of meta-learning work?
- What are the different approaches to designing a meta-learning algorithm?
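As a preview of the "learning how to learn" idea, the loop structure common to many meta-learning algorithms can be sketched in a few lines: an inner loop adapts to each task from a small support set, and an outer loop updates the shared initialization so that adaptation works well. This is a hedged toy sketch with an invented task family (linear regression with a random slope) and hand-picked learning rates, in the spirit of first-order MAML rather than any specific algorithm from the lectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task family (invented for illustration): regress y = a * x
# for a slope a drawn freshly for each task.
def sample_task():
    return rng.uniform(-2.0, 2.0)

def sample_data(a, n=20):
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, a * x

def loss(w, x, y):
    return np.mean((w * x - y) ** 2)

def grad(w, x, y):
    # d/dw of mean((w*x - y)^2)
    return 2.0 * np.mean(x * (w * x - y))

w = 0.0                  # meta-parameter: the shared initialization
alpha, beta = 0.5, 0.05  # inner (adaptation) / outer (meta) learning rates

for _ in range(1000):
    a = sample_task()
    xs, ys = sample_data(a)               # support set: used to adapt
    xq, yq = sample_data(a)               # query set: evaluates the adaptation
    w_task = w - alpha * grad(w, xs, ys)  # inner loop: adapt to this task
    w -= beta * grad(w_task, xq, yq)      # outer loop: first-order meta-update

# On a brand-new task, one gradient step from w should already reduce the loss.
a_new = sample_task()
x, y = sample_data(a_new)
loss_before = loss(w, x, y)
loss_after = loss(w - alpha * grad(w, x, y), x, y)
```

The point of the sketch is the two nested loops: the outer update is driven by how well the *adapted* parameters perform on held-out query data, which is exactly the "generalize to unseen tasks" objective described above.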