Abstract
The growing popularity and expanding scope of application of artificial intelligence models demand attention to their reliability and to the explainability of the results and recommendations they generate. Widespread automation driven by the development of AI may have serious social consequences, and data bias may cause models to exhibit unethical and socially unacceptable tendencies. Such bias can arise not only during data collection and processing, but also during model training or deployment. It is the duty of the scientific community not only to expose the biases of models, but above all to track down unacceptable implementations and to take into account the social consequences of the technological solutions being developed. In this study, we examine not only the causes of bias, but also examples with significant social implications, as well as measures aimed at eliminating bias and its social consequences.
