Citation

Please cite as:

@inproceedings{gumma-etal-2023-empirical,
    title = "An Empirical Study of Leveraging Knowledge Distillation for Compressing Multilingual Neural Machine Translation Models",
    author = "Gumma, Varun and Dabre, Raj and Kumar, Pratyush",
    editor = "Nurminen, Mary and Brenner, Judith and Koponen, ...
step of the proposed framework learns a classifier that jointly optimizes precision and recall by only using imperfectly labeled training samples. We also show that, under certain assumptions on the imperfect labels, the quality of this classifier is almost as good as the one constructed usin...