In supervised learning tasks, if the outcome labels (such as sentiment or behavior categories) are unequally distributed across genders, the model may learn that imbalance as a predictive signal. For example, if negative sentiment is more frequently associated with one gender in the training data, the model may unfairly associate that gender with negativity.
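One simple way to check for this kind of label skew before training is to compare label rates within each gender group. The sketch below is a minimal illustration, assuming a tabular dataset with hypothetical "gender" and "label" columns; the threshold value is an arbitrary choice for demonstration, not a standard.

```python
import pandas as pd

# Toy dataset: sentiment labels are unevenly distributed across genders,
# mimicking the skew described above. Column names ("gender", "label")
# are placeholders for whatever your dataset actually uses.
df = pd.DataFrame({
    "gender": ["F"] * 50 + ["M"] * 50,
    "label":  ["negative"] * 35 + ["positive"] * 15   # 70% negative for F
            + ["negative"] * 15 + ["positive"] * 35,  # 30% negative for M
})

# Cross-tabulate label frequency by gender, normalized within each group.
rates = pd.crosstab(df["gender"], df["label"], normalize="index")
print(rates)

# Flag any label whose rate differs across groups by more than a threshold.
threshold = 0.2  # arbitrary cutoff for this illustration
gaps = rates.max() - rates.min()
for label, gap in gaps.items():
    if gap > threshold:
        print(f"Warning: '{label}' rate differs by {gap:.0%} across genders")
```

A model trained on data like this toy example could reach high accuracy simply by using gender as a proxy for sentiment, which is exactly the failure mode described above. Auditing label distributions per group is a first step; it does not by itself fix the bias.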