Debiasing large pretrained language models using distributional control


Large language models such as GPT-3 are trained on vast amounts of uncurated text from the internet. Despite their huge success and emergent properties such as in-context learning, they suffer from inherent biases and toxicity that can lead to the generation of harmful content. In this blog post, we discuss Generation with Distributional Control (GDC), a novel framework for controlled natural language generation recently published at ICLR 2021. It offers great generality in the types of constraints that can be imposed on the generated text and has large potential to remedy the problem of bias in language models.
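To make the idea of a distributional constraint concrete, here is a toy sketch, not the paper's algorithm: we require that a feature of the generated text (here, whether a sentence mentions "she") has a prescribed expected value, and we tilt a tiny base distribution exponentially until the constraint holds. The sentences, base probabilities and the `brentq` root-finder are illustrative assumptions; GDC itself applies this kind of exponential-family target to a full language model and then trains an autoregressive policy to approximate it.

```python
# Toy illustration of a distributional (moment) constraint, in the spirit of GDC.
# This is a sketch on a tiny discrete distribution, not the paper's method for
# full-scale language models.

import numpy as np
from scipy.optimize import brentq

# Hypothetical base "language model": probabilities over four toy sentences.
sentences = ["he is a doctor", "she is a doctor", "he is a nurse", "she is a nurse"]
a = np.array([0.5, 0.1, 0.1, 0.3])  # assumed base probabilities (sum to 1)

# Binary feature phi(x): 1 if the sentence mentions "she".
phi = np.array([1.0 if s.startswith("she") else 0.0 for s in sentences])

target_moment = 0.5  # distributional constraint: E_p[phi] = 0.5

def tilted(lam):
    """Exponential tilt of the base distribution: p(x) proportional to a(x) * exp(lam * phi(x))."""
    w = a * np.exp(lam * phi)
    return w / w.sum()

def moment_gap(lam):
    """Difference between the achieved and the desired expectation of phi."""
    return tilted(lam) @ phi - target_moment

# Solve for the multiplier lam that satisfies the moment constraint.
lam = brentq(moment_gap, -20.0, 20.0)
p = tilted(lam)

print(f"lambda = {lam:.3f}")
for s, pa, pp in zip(sentences, a, p):
    print(f"{s:20s} base={pa:.2f} debiased={pp:.2f}")
print(f"E_p[phi] = {p @ phi:.3f}")  # ~0.5, versus 0.4 under the base distribution
```

Note how the constraint is satisfied on average over the distribution rather than forced on every single sample, which is what distinguishes distributional constraints from pointwise ones.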

Continue reading on Naver’s blog.

References:

A distributional approach to controlled text generation. Muhammad Khalifa, Hady Elsahar and Marc Dymetman. Proceedings of the 9th International Conference on Learning Representations (ICLR), virtual conference, 4–7 May 2021. [Paper] [Code]
