Compressing deep learning models: distillation (Ep.104)

Data Science at Home - A podcast by Francesco Gadaleta


Running large deep learning models on limited hardware or edge devices is often prohibitive. There are methods that compress large models by orders of magnitude while maintaining comparable accuracy at inference time. In this episode I explain one of the first such methods: knowledge distillation.

Come join us on Slack.

References:
Distilling the Knowledge in a Neural Network - https://arxiv.org/abs/1503.02531
Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks - https://arxiv.org/abs/2004.05937
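For readers who want a concrete picture before listening, here is a minimal PyTorch-style sketch of the soft-target loss from the Hinton et al. paper referenced above (not code from the episode): the student is trained on a mix of temperature-softened teacher outputs and the usual hard labels. The function name and the temperature and alpha values are illustrative assumptions, not prescribed settings.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened
    # teacher and student distributions, scaled by T^2 as in the paper.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(soft_student, soft_teacher,
                         reduction="batchmean") * temperature ** 2

    # Hard targets: standard cross-entropy on the ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # alpha (illustrative) balances the two terms.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

In a training loop, the teacher runs in eval mode with gradients disabled, and only the (much smaller) student's parameters are updated with this loss.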
