Some fairness frameworks employ adversarial networks that actively discourage the model from encoding gender information. During training, an adversary attempts to predict gender from the model's learned representations, and the main model learns to minimize this predictability, thus reducing gender bias in those representations, as sketched below.
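One common way to implement this idea is a gradient reversal layer: the adversary trains normally to predict gender, while the encoder receives the negated adversarial gradient and so learns representations from which gender is hard to recover. The following is a minimal, hypothetical PyTorch sketch, not the implementation of any particular framework; the names `AdversarialDebiaser` and `GradientReversal`, the layer sizes, and the toy data are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the
    backward pass, so the encoder is pushed to *hurt* the adversary."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # Negated gradient flows back into the encoder; lambda_ scales
        # how strongly the fairness objective competes with the task.
        return -ctx.lambda_ * grad_output, None

class AdversarialDebiaser(nn.Module):
    """Hypothetical model: shared encoder, a main-task head, and an
    adversary that tries to predict gender from the representation."""
    def __init__(self, input_dim, hidden_dim, num_classes, lambda_=1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
        )
        self.task_head = nn.Linear(hidden_dim, num_classes)
        self.adversary = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # binary gender logit
        )

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        # The adversary sees z through the reversal layer: it trains
        # normally, but the encoder gets the sign-flipped gradient.
        adv_logits = self.adversary(GradientReversal.apply(z, self.lambda_))
        return task_logits, adv_logits

# Toy training step on synthetic data (shapes are assumptions).
model = AdversarialDebiaser(input_dim=64, hidden_dim=32, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
adv_loss_fn = nn.BCEWithLogitsLoss()

x = torch.randn(128, 64)                 # input features
y_task = torch.randint(0, 2, (128,))     # main-task labels
y_gender = torch.rand(128, 1).round()    # protected attribute (toy)

task_logits, adv_logits = model(x)
loss = task_loss_fn(task_logits, y_task) + adv_loss_fn(adv_logits, y_gender)
optimizer.zero_grad()
loss.backward()  # one backward pass: adversary improves at predicting
optimizer.step() # gender, while the encoder learns to hide it.
```

At convergence, the adversary's accuracy on gender should approach chance, indicating that the shared representation carries little recoverable gender signal while the task head still optimizes the main objective.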