Core Risks and Harms Created by AI Systems Due to Bias

Individuals

AI systems can entrench discriminatory practices that violate people’s civil rights. For instance, Google’s photo-labeling software labeled African Americans as gorillas, Microsoft’s chatbot Tay turned into a neo-Nazi within a day of its release, and various AI systems have misclassified white men with error rates of roughly 1% but dark-skinned women with error rates of a striking 35% (Devillers et al., 2021). Google systems have also displayed racial discrimination in searches for people’s names. Anshari et al. (2023) report that searches for names associated with people of Black ancestry were far more likely to return arrest-related information than searches for other racial demographics, irrespective of whether the person had ever been arrested. Moreover, Ashraf’s (2022) study found that AI hate-speech moderation software is 1.5 times more likely to flag tweets by Black users as offensive and to discriminate against other disenfranchised populations with non-Caucasian speech patterns. The Chinese government is deploying increasingly comprehensive biometric surveillance, including facial recognition targeted at ethnic minorities. Even experts who believe algorithms can expose hidden discriminatory tendencies acknowledge an epistemological complexity that can undermine sound ethical assessment of intricate algorithms, intensifying discrimination problems (Heinrichs, 2022). These harms and discriminatory outcomes stem from developers’ conscious and unconscious biases, shaped by cultural background and upbringing, socioeconomic position, and gender identity, which enter the systems directly through the programming and selection of training data.

Furthermore, AI systems can entrench and amplify existing biases in organizations’ recruitment processes, adversely impacting some demographics’ economic opportunities. Research shows that although AI-driven recruitment and hiring can significantly improve recruitment quality, decrease transactional work, and enhance efficiency, algorithmic bias produces discriminatory hiring practices based on personality features, color, race, and gender (Chen, 2023). A well-known case in point is Amazon’s hiring tool, which downgraded women’s resumes (Devillers et al., 2021). Google and LinkedIn have likewise been shown to display fewer high-paying job advertisements to women than to men performing the same searches. Algorithmic bias that limits individuals’ career prospects spans industries. In 2018, Amazon experimented with AI to automate its recruitment process, using an algorithm trained on resumes submitted over roughly ten years to select the most promising candidates (Albaroudi et al., 2024). However, the corporation discontinued the tool following accusations of prejudice. Bias in AI tools can thus discourage companies from embracing these technologies and integrating them into their operations.

The training data consisted mainly of male applicants’ resumes collected over ten years. As a result, the algorithm penalized applications associated with women and favored male-centric language patterns (Albaroudi et al., 2024). Such a model can lock otherwise qualified candidates out of job opportunities. Algorithmic bias is therefore a significant limiting factor, producing AI systems that overlook qualified personnel based on demographic characteristics.
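To make the mechanism concrete, the following minimal sketch uses entirely synthetic, invented resume snippets (not Amazon’s data or model) to show how a text classifier trained on historically skewed hiring outcomes attaches negative weight to terms associated with women, even though those terms say nothing about competence.

```python
# Minimal sketch, assuming synthetic data: a resume screener trained on
# historically male-dominated hiring outcomes learns to penalize terms
# associated with women. The snippets below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: label 1 = hired, label 0 = rejected.
history = [
    ("captain of men's rugby team, software engineer", 1),
    ("men's chess club president, backend developer", 1),
    ("software engineer, cloud infrastructure", 1),
    ("backend developer, distributed systems", 1),
    ("captain of women's rugby team, software engineer", 0),
    ("women's chess club president, backend developer", 0),
]
texts, labels = zip(*history)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The gendered tokens absorb the historical skew even though they carry
# no information about job competence.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("learned weight for 'women':", round(weights["women"], 3))
print("learned weight for 'men':  ", round(weights["men"], 3))
```

Even in this toy setting, the negative weight on “women” comes entirely from the skewed labels, which is why auditing and rebalancing training data matters before deployment.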

The age of intelligent technologies, digitization, and the internet has enabled companies to collect vast amounts of user data every day. Companies such as Google, Facebook, and Amazon now have unprecedented access to consumer data. Research shows that “over the last decade, humans have produced each year as much data as was produced throughout the entire history of humankind” (Carnevale et al., 2023, p. 829). Digitization extends beyond users’ information to their surrounding environment, including homes and domestic appliances that connect to the internet and access services online. Moreover, recent surveillance and traffic-observation programs are digitizing public spaces, with connected automobiles, facial recognition programs, and CCTV cameras on the rise. This expansive digitization has redefined the meaning of privacy, pressing human rights systems to adapt their structures and confront the novel challenge of data protection as a human right. These challenges comprise “non-consensual data collection by consumer products, using AI to identify individuals, AI profiling of individuals based on population-level data, AI-generated inferences of information and identity based on non-sensitive data, and AI decision making” (Ashraf, 2022, p. 775). They account for only a portion of the risks that biased AI systems present to individuals.

Groups

Algorithmic biases in content moderation AI systems disproportionately affect users on social media platforms. Extensive research demonstrates that marginalized social media users are more likely to have their content unfairly removed or suppressed across these sites, with Black users, transgender users, LGBTQIA+ users, and women, particularly women of color, most at risk (Mayworm et al., 2024). For instance, drag queens have faced extensive content removal on X, while Facebook and TikTok have increasingly removed Black users’ posts addressing anti-Blackness (Mayworm et al., 2024). Affected users become frustrated when they believe their content should not have been removed, particularly when it did not breach the platform’s community guidelines. Users who do not understand the intricate workings of algorithmic content removal are especially frustrated. The platforms’ appeal systems also fall short in resolving unwarranted content bans, aggravating users’ dissatisfaction. Popular platforms such as TikTok, Instagram, and YouTube have previously been accused of algorithmically suppressing content from BIPOC and LGBTQIA+ creators (Mayworm et al., 2024). Even though content moderation is vital for curbing harmful and illegal content on social media platforms, biased AI algorithms that suppress posts from specific subgroups advance discriminatory outcomes for disempowered populations.
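One practical way to surface this kind of disparity is to audit moderation decisions by group. The sketch below uses hypothetical log records (not data from any real platform) to compare how often benign posts from each group are wrongly flagged.

```python
# Minimal audit sketch, assuming hypothetical moderation logs: compare how
# often benign posts from each group are wrongly flagged by a moderation model.
from collections import defaultdict

# Each record: (group, model_flagged, actually_violating)
records = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", False, False),
    ("group_a", False, False), ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", False, False), ("group_b", True,  False),
]

wrong_flags = defaultdict(int)
benign_posts = defaultdict(int)
for group, flagged, violating in records:
    if not violating:                 # only benign posts can be false positives
        benign_posts[group] += 1
        wrong_flags[group] += int(flagged)

for group in sorted(benign_posts):
    rate = wrong_flags[group] / benign_posts[group]
    print(f"{group}: false positive rate = {rate:.2f}")
```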

In educational settings, algorithmic biases manifest in the evaluation of student work. For example, teachers may use generative AI to grade text-based assignments such as speeches and essays. Negative biases can arise when AI models unknowingly perpetuate prejudice and discrimination in assessing students’ work without accounting for language and cultural differences (Salazar et al., 2024). Consequently, underrepresented learners may face grading disadvantages, perpetuating educational achievement gaps (Mhlanga, 2023). In institutions of higher learning, algorithmic biases persist because unfair grading and performance-evaluation patterns become internalized, worsening the academic opportunities of marginalized student groups. Gender bias is among the most prevalent algorithmic biases in learning settings, particularly within language-learning programs. For instance, Google Translate rendered the gender-neutral Turkish equivalent of “she/he is a nurse” in the feminine form and “she/he is a doctor” in the masculine form (Akgun & Greenhow, 2022). These translations exemplify how gendered societal prejudices are reproduced in AI language-translation models, significantly affecting personalized learning. Decision-making algorithms in educational settings likewise disproportionately affect students along gender and racial lines, barring many students from optimal educational opportunities because they lack the characteristics the algorithms favor.
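A simple probe can make such gendered defaults visible. In the hedged sketch below, `translate` is a hypothetical stand-in that merely replays the reported Google Translate behavior; in practice it would call a real translation service.

```python
# Probe sketch for the translation example above. `translate` is a
# hypothetical stand-in that replays the reported behavior rather than
# calling any real API.
def translate(turkish_sentence: str) -> str:
    reported_outputs = {
        "o bir hemşire": "she is a nurse",   # gender-neutral "o" -> feminine
        "o bir doktor": "he is a doctor",    # gender-neutral "o" -> masculine
    }
    return reported_outputs[turkish_sentence]

for source in ("o bir hemşire", "o bir doktor"):
    output = translate(source)
    pronoun = output.split()[0]
    print(f"{source!r} -> {output!r} (pronoun assigned: {pronoun!r})")
```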

Society

Healthcare is among the industries increasingly adopting AI systems to streamline operations and optimize patient outcomes. AI has delivered substantial benefits to healthcare by advancing diagnosis and streamlining treatment and intervention plans (Quinn et al., 2021). These merits notwithstanding, algorithmic bias in healthcare AI models can result in wrong diagnoses, medication errors, and vulnerability to deliberate adversarial attacks (Ma et al., 2021). If not detected by system defenses, these AI frameworks can also exacerbate cyberattack vulnerabilities, compromising patient data and confidentiality. Such adverse repercussions erode public trust in healthcare facilities and in AI systems.

Another aspect of biased AI systems that lowers public trust in these technologies is biased decision-making that drives social imbalances. This implicit discrimination contributes to the unjust treatment of underrepresented populations in many contexts, including healthcare, recruitment, loan approvals, and even fatal decisions by self-driving automobiles (Mensah, 2023). The risks to individuals discussed above included the tendency of facial recognition systems to discriminate against people of color; algorithms that are less accurate at identifying darker skin tones reflect a deeper, troubling reality.

Another troubling instance was the Apple Card offering men significantly higher credit lines than women in 2019; in one case, a male entrepreneur obtained a credit limit twenty times higher than his wife’s, even though she had the higher credit score (Holweg et al., 2022). Employing biased systems in law enforcement can likewise result in disproportionate targeting of marginalized communities because of their appearance (Mensah, 2023). The impacts of biased AI systems traverse healthcare, criminal justice, employment, and credit scoring, perpetuating existing inequalities. The unfair outcomes of these prejudices lower public trust in these systems and in associated technological innovations.

Mechanisms of algorithmic political bias also produce AI systems that affect democratic processes. One of the most prevalent involves using social media platforms during election periods to sway voters in a particular direction. Using social sites to disseminate disinformation and fake news has a radicalizing effect in weak democracies (Schleffer & Miller, 2021). Populist, anti-establishment candidates can use social media to undermine democratic systems and institutions. They can also employ biased algorithms to target specific demographics with disproportionate political marketing and steer their voting behavior, skewing election results.

These algorithmic mechanisms also power recommender systems that generate filter bubbles in which people see only information confirming their existing opinions, affecting democracy and, at its center, the electoral process. The right to free and fair elections is a core participatory liberty in every country. AI is also being applied throughout political and electoral processes, from online voting to the hypothetical replacement of human legislators with algorithms, raising serious apprehensions about its usage and implications. Mainz et al. (2022) argue that access to vast data on electors enables AI to infer individual voters’ past voting behavior accurately, making the secret ballot considerably less effective at safeguarding voters against social punishment and ostracism. For instance, Facebook and Google regularly use AI frameworks designed to predict individuals’ voting behavior from the links they click, the posts they interact with on social media, and the news they read, yielding an accurate representation of users’ political orientation (Mainz et al., 2022). This gives these companies, and the political parties that exploit such insights, an upper hand in influencing election outcomes, undermining the electoral democratic process.
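The filter-bubble dynamic mentioned above can be illustrated with a deliberately naive recommender. The toy sketch below uses invented topics and engagement counts (not real platform data) to show how always serving the item a user already engages with most steadily narrows what that user sees.

```python
# Toy sketch of a filter bubble, assuming invented topics and engagement
# counts: a recommender that always serves whatever the user has engaged
# with most narrows exposure over time.
from collections import Counter

topics = ["party_a_news", "party_b_news", "neutral_policy", "local_events"]

def recommend(history: Counter) -> str:
    # Naive engagement-maximizing rule: repeat the most-engaged topic.
    return history.most_common(1)[0][0]

history = Counter({"party_a_news": 2, "neutral_policy": 1})
for step in range(5):
    shown = recommend(history)
    history[shown] += 1              # the user engages with what is shown
    print(f"step {step}: recommended {shown}")

seen = sum(1 for topic in topics if history[topic] > 0)
print(f"topics the user has ever seen: {seen} of {len(topics)}")
```

Real recommender systems are far more sophisticated, but the same feedback loop between what is shown and what is engaged with produces the narrowing described above.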

Companies/Institutions

Companies and institutions implementing AI systems to drive their operations and decision-making face substantial legal and reputational risks. Globally, governments are adopting fundamental policies to regulate AI and algorithmic models (Smuha, 2021). Relevant regulatory bodies include the United States Federal Trade Commission’s Office of Technology, the Consumer Financial Protection Bureau, the European Centre for Algorithmic Transparency, and the Office of Communications in the United Kingdom. These agencies, alongside others worldwide, have launched efforts and policies to regulate AI implementation and usage, seeking to understand algorithmic systems and their societal impacts, associated harms, and legal compliance. Despite rapid advances, the field’s lack of regulation long hindered accountability; without established directives, responsibilities, and standards for developers and implementing companies, mitigating misuse and harmful outcomes from biased systems was effectively impossible (Mensah, 2023). These inadequacies prompted the establishment of the extensive frameworks now overseen by these agencies, which govern the development and implementation of AI systems while fostering privacy rights, fairness, and transparency.

Additionally, organizations that employ biased systems may face substantial legal ramifications because those systems lack fairness and transparency. Regulators will require algorithmic systems to adhere to stringent data-collection practices that safeguard user privacy rights, and to meet non-discrimination and fairness standards so that biases against specific groups are not perpetuated. Beyond legal ramifications, companies are susceptible to public backlash: those that adopt biased systems in their recruitment processes and everyday operations are bound to be put in the spotlight by consumer data breaches and discrimination against candidates. Public criticism can damage a company’s reputation, resulting in a reduced customer base, increased employee turnover, and erosion of consumer trust.

Besides legal and reputational risks, biased AI systems create operational inefficiencies. Biased systems can produce inaccurate and unjust decisions, undermining the efficiency of automated decision-making (Schwartz et al., 2022). Companies rely on vast datasets of consumer behavior and market trends to make informed decisions that optimize operations, maximize productivity and customer satisfaction, and maintain a competitive edge, and AI systems are vital in analyzing these data. However, these systems are susceptible to selection, algorithmic, and confirmation bias: the training data may over-represent specific subgroups or embed historical prejudices and developers’ preconceptions that shape the systems’ functionality and performance. Companies may consequently rely on non-representative or misleading information and make erroneous or suboptimal decisions that cost them a consumer segment or their competitive advantage. In healthcare settings, these biases manifest as inaccurate diagnoses that worsen patient outcomes and inflate costs. Biased systems can thus cause major operational inefficiencies that add costs and damage reputation.
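One lightweight operational safeguard is to audit automated decision logs for selection-rate gaps before and during deployment. The sketch below uses illustrative numbers (not drawn from any company) and applies the common four-fifths heuristic to flag groups whose approval rate falls well below that of the highest-approved group.

```python
# Hedged sketch of a simple deployment check, assuming illustrative numbers:
# compare selection rates across groups and flag gaps using the common
# "four-fifths" heuristic.
decisions = {
    # group: (approved, total applicants)
    "group_a": (80, 100),
    "group_b": (48, 100),
}

rates = {group: approved / total for group, (approved, total) in decisions.items()}
reference_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / reference_rate
    status = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {status}")
```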

The customer service segment is vital to organizational efficiency, yet biased service chatbots threaten customer service operations through unsatisfactory interactions. For example, some language models discriminate against clients who speak a particular vernacular (Bang et al., 2021). Because of their training data, the models inadvertently reply with offensive or irrelevant responses, leading to reputational harm and customer discontent. Moreover, automated client-support systems that fail to manage emotional reactions and delicate subjects properly cause miscommunication (Chen et al., 2023). Such interactions create operational inefficiencies through customer dissatisfaction, potentially costing the company lucrative business.

Ecosystems

Using artificial intelligence and similar computer programs in the agricultural industry carries risks, especially because these systems interact with animals and living ecosystems. Even though such technology helps minimize the use of pesticides and other toxic compounds and protects the humans who work on farms (Rodzalan et al., 2020), it can cause great suffering among farm animals. According to Gardezi and Stock (2021), some farmers have expressed worries about using AI bots. Similar security uncertainties arise on highly robotic and automated farms, where imminent risks include hacking, sabotage, and corporate spying (Carolan, 2020). These concerns point to a significant challenge that demands intervention to prevent harm during AI deployment in the field. Algorithmic bias may also create a preference for some crops over others. The primary markets for agricultural AI tools are Europe, the United States, and possibly China (Sparrow et al., 2021). Farmers in other regions therefore risk receiving models trained on soil chemistry and crop-yield data from those primary markets, making the models poorly suited to other regions’ local contexts (Sparrow et al., 2021). Attempts to remedy this by retraining the frameworks with data from the respective areas may still embed a preference for certain crops.
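The regional-transfer problem can be illustrated with a toy model. In the hedged sketch below, built on entirely synthetic soil and yield numbers, a yield predictor fitted to one region’s data degrades sharply when applied to a region whose soils respond differently.

```python
# Hedged sketch with entirely synthetic soil and yield numbers: a yield model
# fitted to one region's data degrades when applied to a region whose soils
# respond differently, mirroring the regional-transfer risk described above.
import numpy as np

rng = np.random.default_rng(0)

# "Primary market" region A: yield rises steeply with soil pH in this range.
ph_a = rng.uniform(6.0, 7.5, 200)
yield_a = 2.0 * ph_a + rng.normal(0.0, 0.2, 200)

# Region B: different soils, flatter response (invented relationship).
ph_b = rng.uniform(4.5, 6.0, 200)
yield_b = 0.5 * ph_b + 9.0 + rng.normal(0.0, 0.2, 200)

# Fit on region A only, as a vendor training on its primary market might.
slope, intercept = np.polyfit(ph_a, yield_a, 1)

def mean_abs_error(ph, true_yield):
    predicted = slope * ph + intercept
    return float(np.mean(np.abs(predicted - true_yield)))

print(f"error in training region A: {mean_abs_error(ph_a, yield_a):.2f}")
print(f"error in unseen region B:   {mean_abs_error(ph_b, yield_b):.2f}")
```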

While the data produced by AI systems may enhance trust, humans still struggle to trust these systems because they lack emotive states (Rose et al., 2021). Forging trust within the agricultural sector is an uphill task because the data collected by autonomous AI-powered machines deployed on farms is sent to agribusinesses, largely without the farmer’s knowledge (Stock & Gardezi, 2021). There is a danger that with some AI variants, particularly those built on machine learning algorithms, farmers cannot comprehend how the system reaches its decisions, and the system may therefore behave in unexpected ways. This raises practical questions about the conditions under which the models should be trusted and who bears responsibility. For instance, how should a farmer act if an AI system proposes a solution that contradicts the farmer’s own judgment (Sparrow et al., 2021)? The dilemma over when to trust AI systems increases the risk of accidents, compromising natural resources.

Despite the apparent benefits to value-chain efficiency, integrating AI in agricultural supply chains can amplify inequities in resource distribution. AI models provide enhanced analytics and decision support that streamline supply chain processes, improving efficiency and sustainability (Gikunda, 2024). However, biased systems may fail to account for the socio-cultural context of specific agricultural communities, significantly impacting end users. Prejudiced predictive-logistics algorithms may also favor particular areas, creating supply-and-demand disparities. The issue extends to precision agriculture and the actors in the supply chain: precision agriculture poses ethical and social concerns, such as trust and privacy issues among farmers and other players in the food system (Stock & Gardezi, 2021). AI systems gather data on farmers and their produce to streamline precision agriculture and resource distribution, but biased systems aggravate the lack of trust between farmers and other actors in the food system, mainly because farmers have little to no control over how the collected data is used. This ambiguity in the supply chain results in disruptions, inequitable resource distribution, and preference for specific regions, exacerbating economic disparities.

References

Akgun, S., & Greenhow, C. (2022). Artificial intelligence in education: Addressing ethical challenges in K-12 settings. AI and Ethics, 2(3), 431-440. https://doi.org/10.1007/s43681-021-00096-7

Albaroudi, E., Mansouri, T., & Alameer, A. (2024). A comprehensive review of AI techniques for addressing algorithmic bias in job hiring. AI, 5(1), 383-404. https://doi.org/10.3390/ai5010019

Anshari, M., Hamdan, M., Ahmad, N., Ali, E., & Haidi, H. (2023). COVID-19, artificial intelligence, ethical challenges, and policy implications. AI & Society, 38(2), 707-720. https://doi.org/10.1007/s00146-022-01471-6

Ashraf, C. (2022). Exploring the impacts of artificial intelligence on freedom of religion or belief online. The International Journal of Human Rights, 26(5), 757-791. https://doi.org/10.1080/13642987.2021.1968376

Bang, J., Kim, S., Nam, J. W., & Yang, D. G. (2021, August). Ethical chatbot design for reducing negative effects of biased data and unethical conversations. In 2021 International Conference on Platform Technology and Service (PlatCon) (pp. 1-5). IEEE. https://doi.org/10.1109/PlatCon53246.2021.9680760

Carnevale, A., Tangari, E. A., Iannone, A., & Sartini, E. (2023). Will Big Data and personalized medicine do the gender dimension justice? AI & Society, 38(2), 829-841. https://doi.org/10.1007/s00146-021-01234-9

Carolan, M. (2020). Automated agrifood futures: Robotics, labor and the distributive politics of digital agriculture. The Journal of Peasant Studies, 47(1), 184-207. https://doi.org/10.1080/03066150.2019.1584189

Chen, P., Wu, L., & Wang, L. (2023). AI fairness in data management and analytics: A review on challenges, methodologies, and applications. Applied Sciences, 13(18), 10258. https://doi.org/10.3390/app131810258

Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1-12. https://doi.org/10.1057/s41599-023-02079-x

Devillers, L., Fogelman-Soulié, F., & Baeza-Yates, R. (2021). AI & human values: Inequalities, biases, fairness, nudge, and feedback loops. Reflections on Artificial Intelligence for Humanity, 76-89. https://doi.org/10.1007/978-3-030-69128-8_6

Gardezi, M., & Stock, R. (2021). Growing algorithmic governmentality: Interrogating the social construction of trust in precision agriculture. Journal of Rural Studies, 84, 1-11. https://doi.org/10.1016/j.jrurstud.2021.03.004

Gikunda, K. (2024). Harnessing artificial intelligence for sustainable agricultural development in Africa: Opportunities, challenges, and impact. arXiv preprint arXiv:2401.06171. https://doi.org/10.48550/arXiv.2401.06171

Heinrichs, B. (2022). Discrimination in the age of artificial intelligence. AI & Society, 37(1), 143-154. https://doi.org/10.1007/s00146-021-01192-2

Holweg, M., Younger, R., & Wen, Y. (2022). The reputational risks of AI. California Management Review Insights.

Ma, X., Niu, Y., Gu, L., Wang, Y., Zhao, Y., Bailey, J., & Lu, F. (2021). Understanding adversarial attacks on deep learning-based medical image analysis systems. Pattern Recognition, 110, 107332. https://doi.org/10.1016/j.patcog.2020.107332

Mainz, J. T., Sønderholm, J., & Uhrenfeldt, R. (2022). Artificial intelligence and the secret ballot. AI & Society, 1-8. https://doi.org/10.1007/s00146-022-01551-7

Mayworm, S., DeVito, M. A., Delmonaco, D., Thach, H., & Haimson, O. L. (2024). Content moderation folk theories and perceptions of platform spirit among marginalized social media users. ACM Transactions on Social Computing, 7(1), 1-27. https://doi.org/10.1145/3632741

Mensah, G. B. (2023). Artificial intelligence and ethics: A comprehensive review of bias mitigation, transparency, and accountability in AI systems. http://dx.doi.org/10.13140/RG.2.2.23381.19685/1

Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT towards lifelong learning. In FinTech and Artificial Intelligence for Sustainable Development: The Role of Smart Technologies in Achieving Development Goals (pp. 387-409). Cham: Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-37776-1_17

Quinn, T. P., Senadeera, M., Jacobs, S., Coghlan, S., & Le, V. (2021). Trust and medical AI: The challenges we face and the expertise needed to overcome them. Journal of the American Medical Informatics Association, 28(4), 890-894. https://doi.org/10.1093/jamia/ocaa268

Rodzalan, S. A., Ong, G. Y., & Mohd Noor, N. N. (2020). A foresight study of artificial intelligence in the agriculture sector in Malaysia. International Journal of Advanced Science and Technology, 29(6), 447-462. http://eprints.uthm.edu.my/id/eprint/6610

Rose, D. C., Lyon, J., de Boon, A., Hanheide, M., & Pearson, S. (2021). Responsible development of autonomous robotics in agriculture. Nature Food, 2(5), 306-309. https://doi.org/10.1038/s43016-021-00287-9

Salazar, L. R., Peeples, S. F., & Brooks, M. E. (2024). Generative AI ethical considerations and discriminatory biases on diverse students within the classroom. In The Role of Generative AI in the Communication Classroom (pp. 191-213). IGI Global. http://dx.doi.org/10.4018/979-8-3693-0831-8.ch010

Schleffer, G., & Miller, B. (2021). The political effects of social media platforms on different regime types. Texas National Security Review (Summer 2021). http://dx.doi.org/10.26153/tsw/13987

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., & Hall, P. (2022). Towards a standard for identifying and managing bias in artificial intelligence (NIST Special Publication 1270). US Department of Commerce, National Institute of Standards and Technology.

Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: Regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57-84. https://doi.org/10.1080/17579961.2021.1898300

Sparrow, R., Howard, M., & Degeling, C. (2021). Managing the risks of artificial intelligence in agriculture. NJAS: Impact in Agricultural and Life Sciences, 93(1), 172-196. https://doi.org/10.1080/27685241.2021.2008777

Stock, R., & Gardezi, M. (2021). Make bloom and let wither: Biopolitics of precision agriculture at the dawn of surveillance capitalism. Geoforum, 122, 193-203. https://doi.org/10.1016/j.geoforum.2021.04.014
