In supervised learning tasks, if the outcome labels (such as sentiment or behavior categories) are unequally distributed across genders, the AI system may learn biased predictive behavior. For example, if negative sentiment is more frequently associated with one gender in the training data, the model may unfairly associate that gender with negativity.
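One simple way to surface this kind of label imbalance before training is to count label frequencies per group. The sketch below uses hypothetical, illustrative data (the sample list and gender/sentiment values are assumptions, not a real dataset) to show how negative labels can be over-represented for one gender:

```python
from collections import Counter

# Hypothetical training examples: (gender, sentiment_label).
# Illustrative data only -- negative sentiment is deliberately
# over-represented for one group, mirroring the imbalance above.
samples = [
    ("female", "negative"), ("female", "negative"), ("female", "positive"),
    ("male", "positive"), ("male", "positive"), ("male", "negative"),
    ("female", "negative"), ("male", "positive"),
]

# Count (gender, label) pairs to expose the per-group imbalance.
counts = Counter(samples)
for gender in ("female", "male"):
    neg = counts[(gender, "negative")]
    total = sum(v for (g, _), v in counts.items() if g == gender)
    print(f"{gender}: {neg}/{total} negative ({neg / total:.0%})")
```

A classifier trained on data like this can pick up the spurious gender-to-sentiment correlation; auditing label distributions per group is a cheap first check before any mitigation (such as reweighting or resampling) is applied.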
