PyTorch is a great framework for creating deep learning models and pipelines. Nevertheless, for all its merits, it could use improvements in how training loops, validation, and testing of neural networks are written. Moreover, PyTorch users are likely to introduce more bugs during the research and development process as they mix in complicated features like multi-GPU, mixed precision, and distributed training.
For real breakthroughs in deep learning, we need a strong foundation. In this blog post, I would like to introduce the Catalyst framework, developed with a focus on reproducibility, fast experimentation, and code/idea reuse. We’ll also provide a tutorial on the MNIST classification problem as an example.
What is Catalyst? What is it for? And who maintains it?
Catalyst is a PyTorch framework for Deep Learning research and development. You get a training loop with metrics, model checkpointing, advanced logging, and distributed training support, all without the boilerplate.