How AI Impacts Society and Ethical AI Guidelines
Introduction
In recent years, artificial intelligence (AI) has rapidly evolved from a futuristic concept into one of the most transformative technologies of our time. Today, AI is a core technology in applications such as navigation systems, recommendation algorithms, text generation, and image recognition. Despite these advances, the widespread adoption of AI under limited regulation has raised concerns about its ethical implications and societal impacts. Issues such as algorithmic bias, social injustice, privacy violations, and discrimination have been identified in AI systems. Furthermore, AI largely remains a black box, with limited understanding of the internal workings of the systems. In response, there is a growing effort to prevent AI systems from reinforcing or exacerbating these harms. The goal of this paper is therefore to explore the key ethical issues of AI and their impacts on society. The paper then examines existing AI guidelines before making recommendations to mitigate the ethical risks of AI.
Key Ethical Issues of AI
In the context of AI, there is a need to develop, deploy, and use systems that are morally acceptable and socially beneficial. The literature associates multiple ethical issues with AI. These issues are multifaceted and complex, often cutting across social, political, and economic dynamics, so no exhaustive list can be drawn up. In a comprehensive review of the literature, Prem (2023) identified 115 ethical issues under the categories of privacy, fairness and bias, explainability, accountability and transparency, correctness and accuracy, diversity, robustness, and reproducibility. In similar research, Heyder et al. (2023) identified six categories of ethical issues in AI: transparency and accountability, privacy and maleficence, justice and fairness, beneficence and sustainability, responsibility, and autonomy and humanity. Overall, while researchers differ in how they formulate the ethical issues, the resulting categories tend to be compatible. This paper therefore adopts the categories found in most of the studies: transparency and explainability, fairness and bias, non-maleficence and beneficence, and privacy.
Transparency and explainability. Concerns about transparency and explainability stem from the way AI systems are structured. Most AI systems are complex and opaque, which makes it difficult to understand their decision-making processes (Kempt et al., 2023; Heilinger, 2022). The lack of transparency is a major issue because it hinders accountability and the ability to identify and rectify bias. Building transparent systems is therefore crucial to understanding, scrutinizing, and regulating the development and deployment of AI. A closely related issue is explainability, which concerns the difficulty of obtaining clear explanations of how AI systems arrive at their outputs (Akinrinola et al., 2024). When a system is neither transparent nor explainable, it is difficult to attribute responsibility. For critical systems such as those used in healthcare, explainability and transparency are essential so that AI decisions can be reviewed to avert negative outcomes.
Fairness and bias. Effective AI systems must not amplify or perpetuate biases. In the literature, fairness is mainly discussed in terms of bias toward certain groups. For instance, Zembylas (2023) considers AI fair when it does not systematically discriminate against individuals or groups on the basis of gender or race. Memarian and Doleck (2023) similarly frame fairness in terms of prejudice or favoritism toward individuals based on acquired or inherent characteristics. One of the key causes of unfairness is the data used to train AI systems. According to Ungerer and Slade (2022), AI systems rely on real-world data to gain knowledge. As such, the systems are likely to reproduce real-life patterns in their decision-making, including discrimination, social injustice, and bias. Moreover, because existing systems may fail to account for diversity during training, they can make skewed assumptions about some groups or individuals. It is therefore important to ensure fairness across data collection, algorithm development, and deployment to limit bias in AI.
Privacy. Control over personal information is another key issue in AI. The power of AI derives largely from the availability of massive amounts of data in digital form, which algorithms use to identify patterns that support decision-making. To achieve high levels of accuracy and reliability, AI systems such as those in healthcare rely on personal information for training. Because there is limited understanding of how this data is used, there is potential for data breaches or misuse of data in AI systems (Hermann, 2022). This is evident from the development of ChatGPT, in which OpenAI has been reported to have used private data from various organizations without their consent (Guleria et al., 2023). A key challenge is that, because AI systems are opaque, informed consent is difficult to obtain: most organizations have limited knowledge of how their data will be used. As such, privacy remains a major problem in the development of AI systems.
Non-maleficence and beneficence. Drawing on the principles of non-maleficence and beneficence, AI systems are expected to cause minimal harm while maximizing societal benefits. Such systems should be safe, fair, correct, reproducible, and robust. Beneficence relates to the sustainability, well-being, and common good promoted by AI systems, while non-maleficence relates to minimizing harm by ensuring that AI systems operate within specific guardrails (Richards et al., 2023). In the literature, harm encompasses violations of policy or regulation, physical harm, and discrimination (Jobin et al., 2019). Beneficence, on the other hand, is reflected in the promotion of human well-being, the creation of socio-economic opportunities, and the fostering of peace and happiness. These two ethical principles should form the foundation of AI systems so that the technology advances, rather than undermines, human flourishing.
Risks and Harms of AI
If organizations implement AI systems without addressing the identified ethical issues, various risks and harms are likely to surface. The literature classifies the risks of AI into various groups. For instance, Weidinger et al. (2021) classified the risks into discrimination, exclusion and toxicity, information hazards, misinformation harms, malicious use, human-computer interaction harms, and automation, access, and environmental harms. Building on these categories, this paper groups the risks and harms into four categories: individual, group, societal, and corporate. This classification accommodates the wide range of risks and harms associated with AI.
Individual risks and harms. AI systems are typically developed and optimized to mirror the environments in which they are deployed. For instance, large language models are designed to mimic the statistical patterns of natural language as accurately as possible (Weidinger et al., 2021). Similarly, healthcare AI systems are designed to mirror the decision-making processes of medical facilities. As such, if the data used to train an AI system is unfair or discriminatory, the resulting model will reproduce those patterns. In addition to discriminatory or biased data, poor representation or misrepresentation of some individuals can also introduce problems. Other causes of individual risks and bias include feedback loops, sample size disparity, and limited feature coverage for minority groups (Koshiyama et al., 2024). Leaking or inferring personal information is also a key risk: such information can be used to harm people through location tracking, impersonation, or identity theft. Overall, AI systems can perpetuate harmful biases and stereotypes or even endanger individuals' lives.
Group risks and harms. At the group level, risks and harms mainly relate to malicious uses of AI. For instance, the collection of vast amounts of data by governments has raised the potential for censorship and surveillance (Yang, 2023). Previously, analyzing such vast amounts of data required many skilled human analysts (Gu et al., 2024). AI, however, has made it possible to automate the analysis and draw insights within minutes. Owing to this capability, malicious users or governments can use AI for mass surveillance or censorship. In politics, governments can use AI to identify dissidents and target them with censorship. In addition to curtailing human rights through censorship, AI technologies such as facial recognition can infringe on individuals' privacy rights. When such technologies are employed for mass surveillance, they can produce a chilling effect (modification of behavior) on free speech and association (Karpa et al., 2022). By controlling speech and association, governments can deepen existing social inequities. For instance, minority groups may avoid public protests or online advocacy for fear of discriminatory targeting by law enforcement agencies.
Societal risks and harms. At the societal level, AI systems are associated with misinformation risks and malicious uses. Misinformation risks entail the provision of misleading or fabricated claims by AI systems (Weidinger et al., 2021). For instance, large language models have been found to present false or misleading information as factually correct (Dhuliawala et al., 2023). Because the underlying algorithms provide correct information in most instances, users may come to trust the systems excessively, which increases the risk of relying on them precisely when their output is inaccurate. Widespread misinformation can also amplify societal distrust. In critical sectors such as healthcare, transportation, and security, misinformation can cause physical harm. For instance, false traffic information in a foreign country can lead to an accident (Weidinger et al., 2021), while false or misleading information in healthcare can lead to misdiagnosis or inaccurate dosages (Weidinger et al., 2021; Wu et al., 2022). In the security sector, false or misleading information can lead to the unsafe operation of autonomous systems or the exaggeration of ambiguous signals, thereby increasing instability (Johnson, 2022). Considering these outcomes, societal risks and harms can be more severe than individual and group risks and harms.
Corporate risks and harms. In addition to these general risks and harms, AI can introduce risks to business organizations. According to Koshiyama et al. (2024), organizations are increasingly concerned about the potential for AI to cause reputational and financial damage. This concern is particularly relevant to AI systems that support critical decisions. For instance, medical facilities found to have issued incorrect diagnoses because of reliance on AI are likely to see reduced patient flow. Financially, the failure of AI systems to address issues such as privacy could lead to heavy regulatory fines when data breaches occur. Beyond reputational damage, AI is also affecting organizational cultures; for instance, reliance on automated software development systems requires a major cultural shift, which can be difficult to implement (Vemuri & Venigandla, 2022). Overall, organizations need to manage AI risks effectively to minimize potential reputational and financial damage and poor cultural fit.
AI Ethical Guidelines
To handle ethical issues and minimize the risks and harms of AI, various interest groups have developed guidelines for the development and use of AI systems. The literature documents over 100 proposed AI ethics guidelines. These guidelines aim to offer guidance on each of the ethical principles or issues identified above. In a systematic review of 84 AI guidelines, Jobin et al. (2019) identified transparency as the most dominant issue (present in 73 of the 84 guidelines). Justice and fairness (68/84), non-maleficence (60/84), responsibility (60/84), privacy (47/84), and beneficence (41/84) were also identified as major codes. Hagendorff (2020) focused on recent guidelines addressing ethical issues and normative stances of AI, excluding national guidelines and specialized documents. Of the 22 guidelines included in that review, privacy, accountability, and fairness were present in approximately 80%.
Several key observations can be drawn from the existing AI guidelines. For instance, Hagendorff (2020) noted that various technical guidelines have been proposed to address fairness and discrimination, accountability and explainability, privacy, justice, and bias. Because these issues can be formulated mathematically, technical solutions are comparatively easy to define. However, while most of the guidelines acknowledge that technical solutions exist, only a few offer genuine technical explanations. It is also notable that while most of the existing guidelines are developed at the national level, two of them – the European Commission Ethics Guidelines for Trustworthy AI and the Organisation for Economic Co-operation and Development (OECD) AI Principles – cover a wider jurisdiction (Hagendorff, 2020). These two guidelines are therefore discussed further below.
The EU Ethics Guidelines for Trustworthy AI, drafted in 2018 and published in 2019, focus on three areas. First, AI systems should comply with applicable laws and regulations such as the General Data Protection Regulation (GDPR), the Charter of Fundamental Rights, and anti-discrimination directives. Second, AI systems should be robust in order to avoid unintentional harm. Third, AI systems should comply with ethical standards, including the prevention of harm, respect for human autonomy, explicability, and fairness (Larsson, 2020). Before an AI system is deployed, it should be assessed against seven requirements: human agency and oversight; privacy and data governance; accountability; diversity, non-discrimination and fairness; societal and environmental wellbeing; transparency; and technical robustness and safety. Overall, the EU guidelines offer a comprehensive framework for building trust and developing systems that can positively impact society.
The OECD member countries adopted the OECD AI Principles in 2019 (updated in 2024) to help member countries implement effective and safe AI systems. The principles cover five areas: accountability; robustness, security, and safety; transparency and explainability; inclusive growth, sustainable development, and wellbeing; and human rights and democratic values (including privacy and fairness). One of the key strengths of the OECD principles is that they offer practical recommendations for policymakers, although they do not provide extensive technical measures for ensuring ethical AI. While non-binding, the principles were the first to be adhered to by non-OECD countries such as Romania, Argentina, and Brazil (Carter, 2020). This wide adoption demonstrates their relevance in building ethical AI systems.
AI Regulations
One key point to note is that the existing ethical AI guidelines are not binding. As such, there are growing calls to develop appropriate regulations to mitigate the risks and harms of AI. Despite this push, challenges remain. For instance, Reddy et al. (2020) noted that AI algorithms update themselves based on feedback from previous actions, and these updates might extend beyond the behavior that was originally approved. Another issue is the lack of explainability of AI algorithms (Reddy et al., 2020), which makes it difficult to attribute responsibility when harm is caused. The autonomy of AI systems is a further challenge for attributing responsibility and implementing regulations (Wong, 2020). As AI systems continue to learn and evolve, regulation tends to be reactive to emerging issues rather than preventive. Nonetheless, adherence to instruments such as the EU GDPR, the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data, and the Fair Information Practice Principles (FIPPs) could provide a foundation for regulating AI systems.
Foundational Controls of AI
Although multiple factors impede efforts to address ethical issues in AI, some fundamental controls are necessary. These controls refer to the implementation of appropriate constraints on the behavior of an AI algorithm. One key control aims to ensure that AI systems are accountable. Here, tools should be developed for auditing AI systems to ensure that they comply with organizational policies, industry guidelines and standards, and laws and regulations (Raji et al., 2020). Auditing can reveal disparities in error rates, interaction failures, and other indicators of unethical behavior. It is also important to consider sector-specific approvals for AI systems. For instance, the United States Food and Drug Administration (FDA) has a comprehensive approach to reviewing and approving medical software. Such reviews can help ensure that proposed AI systems function as required without unethical practices.
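To make the auditing step concrete, the following minimal sketch illustrates how an auditor might compare error rates across demographic groups and flag a model when the gap exceeds an organizationally defined threshold. The record structure, the field names (group, label, prediction), and the threshold are assumptions for illustration only, not a reference to any specific auditing tool.

```python
from collections import defaultdict

# Hypothetical audit records: each entry holds a protected-group label,
# the ground-truth outcome, and the model's prediction.
records = [
    {"group": "A", "label": 1, "prediction": 1},
    {"group": "A", "label": 0, "prediction": 0},
    {"group": "A", "label": 1, "prediction": 0},
    {"group": "B", "label": 1, "prediction": 0},
    {"group": "B", "label": 0, "prediction": 1},
    {"group": "B", "label": 1, "prediction": 0},
]

def error_rates_by_group(records):
    """Compute the share of incorrect predictions for each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def audit(records, max_disparity=0.10):
    """Flag the model if the gap between the best- and worst-served
    groups exceeds the organization's disparity threshold (assumed here)."""
    rates = error_rates_by_group(records)
    disparity = max(rates.values()) - min(rates.values())
    return {"error_rates": rates, "disparity": disparity,
            "flagged": disparity > max_disparity}

if __name__ == "__main__":
    print(audit(records))
```

In practice, such a check would be one item in a broader audit covering policy compliance, documentation, and interaction failures, and the disparity threshold would be set by the organization's governance board.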
In addition to auditing, organizations need to aim for the ethical design and development of AI systems. For instance, there is a need to develop transparent and explainable AI systems using tools ranging from direct measurement to visualization (Reddy et al., 2020). Another key area is fairness and bias: organizations should establish standards that ensure traditional ethical principles are not overlooked. AI input data and algorithms should be selected and applied so that they do not discriminate against minority groups or exacerbate existing disparities, which requires an emphasis on inclusion and on detecting sampling bias. Finally, unless an organization is using a public dataset, consent should be emphasized in the development of AI systems (Neri et al., 2020). As in research involving human subjects, guidelines should govern the collection, use, protection, and destruction of personally identifiable information, and regulations such as the EU GDPR should be used to offer guidance.
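As a simple illustration of how consent could be enforced at the data-preparation stage, the sketch below filters out records lacking documented consent and strips personally identifiable fields before the data reaches a training pipeline. The record schema and the field names (consent, name, email, national_id) are hypothetical; real implementations would follow the organization's own data governance tooling and legal advice.

```python
# Hypothetical set of personally identifiable fields to strip before training.
PII_FIELDS = {"name", "email", "national_id"}

raw_records = [
    {"name": "Ann", "email": "ann@example.org", "age": 34, "consent": True},
    {"name": "Bob", "email": "bob@example.org", "age": 29, "consent": False},
]

def prepare_for_training(records):
    """Keep only records with documented consent and drop PII fields."""
    prepared = []
    for record in records:
        if not record.get("consent", False):
            continue  # no documented consent: exclude from training data
        prepared.append({k: v for k, v in record.items()
                         if k not in PII_FIELDS and k != "consent"})
    return prepared

print(prepare_for_training(raw_records))  # [{'age': 34}]
```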
At the lowest level, there is a need to implement technical controls to enforce ethical AI practices. The justification for technical controls is that accurate and fair models are unlikely to emerge from faulty data selection, poor algorithm design, or inappropriate calculation methods. For instance, the training data for AI algorithms should reflect the social, political, and economic makeup of the target population. However, as noted by Mittelstadt (2019), if ethical principles are not considered in design and development, even the best controls are unlikely to yield positive outcomes. As such, the technical controls implemented should focus on mitigating the ethical issues identified in this paper.
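One way to operationalize the requirement that training data reflect the target population is a simple representation check, sketched below. The group labels and population shares are invented for the example; in practice the reference distribution would come from census or domain statistics.

```python
from collections import Counter

def representation_gaps(sample_groups, population_shares):
    """Compare each group's share of the training sample with its share
    of the target population and report the gap (sample minus population)."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    return {g: counts[g] / total - share
            for g, share in population_shares.items()}

# Illustrative numbers only: group names and population shares are assumptions.
sample = ["group_x"] * 80 + ["group_y"] * 20
population = {"group_x": 0.6, "group_y": 0.4}

gaps = representation_gaps(sample, population)
underrepresented = [g for g, gap in gaps.items() if gap < -0.05]
print(gaps, underrepresented)  # group_y is under-represented by 20 points
```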
Creating a Culture of Ethical AI
Although ethical guidelines, regulations, and controls can mitigate ethical AI issues, the ultimate goal of any organization is to create an ethical culture around how AI systems are designed, developed, and deployed. One key starting point is the establishment of appropriate governance structures (Eitel-Porter, 2021). Developing a governance structure begins with selecting ethical principles that align with the organization's values, goals, and vision. The next step is to form an ethics board or committee responsible for reviewing AI use cases and strategies. The committee can also create processes that guide decision-making for responsible and safe AI. Finally, it is important to establish a framework for attributing responsibility to specific teams.
In addition to the overall AI governance structure, there is a need for appropriate data governance. Data governance relates to how the data used to train and validate AI algorithms is acquired and prepared (Eitel-Porter, 2021). Data governance should cover the elements of the data model, appropriate use of data (checking for bias, diversity, and fairness), and transparency (explainability). During training, effort should be devoted to creating an audit trail of the data model and its evolution so that auditing activities can be carried out. Finally, continuous monitoring could be essential in ensuring that the data model aligns with the defined ethical parameters even after deployment.
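A minimal way to create the audit trail described above is to record a content hash and descriptive metadata each time the training dataset changes, as in the sketch below. The log file name and metadata fields are illustrative assumptions; production systems would typically rely on dedicated data-versioning tools.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("dataset_audit_log.jsonl")  # illustrative file name

def record_dataset_version(dataset_path, description):
    """Append a hash-stamped entry describing the current dataset version."""
    digest = hashlib.sha256(Path(dataset_path).read_bytes()).hexdigest()
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset": str(dataset_path),
        "sha256": digest,
        "description": description,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example usage (assumes a local file "training_data.csv" exists):
# record_dataset_version("training_data.csv", "removed records lacking consent")
```

Each entry ties a dataset state to a verifiable hash, so later audits can confirm which data version produced a deployed model.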
Privacy and cybersecurity are other key factors in creating a culture of ethical AI. According to Hu (2024), AI has rapidly increased the need for data protection, data privacy, and cybersecurity due to its potential impact on national security and civil rights. For data privacy, organizations need to establish a framework that enables users to understand what data they share and how it is used. Furthermore, users should have control over how their data is used or shared across different organizations. The goal of such approaches is to design AI systems that offer users visibility and control throughout the data lifecycle.
There is also a need to develop a culture that prioritizes transparency and explainability. According to Balasubramaniam et al. (2022), building transparent and explainable systems is essential for building trust with users. An AI system is considered explainable if its developers can describe its purpose, the role of AI, the data inputs and outputs, the behavior of the algorithm, and the limitations of the system. The most important aspect of an explainable system is understanding how inputs are translated into outputs (Ehsan et al., 2021; Balasubramaniam et al., 2022). Since most AI systems are treated as black boxes, it can be argued that many existing systems were not designed with explainability in mind. Emphasizing transparency and explainability as part of the culture can therefore help organizations build trust and increase the adoption of future AI systems.
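To illustrate one widely used, model-agnostic way of relating inputs to outputs, the sketch below estimates permutation importance for a toy model: each feature is shuffled in turn, and the resulting drop in accuracy indicates how heavily the model relies on that feature. The data and the stand-in model are synthetic assumptions, not a prescription for any particular explainability tool.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the label depends strongly on feature 0, weakly on
# feature 1, while feature 2 is pure noise.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=500) > 0).astype(int)

def toy_model(features):
    """A fixed linear classifier standing in for a trained model."""
    return (2.0 * features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10):
    """Average accuracy drop when each feature column is shuffled."""
    baseline = np.mean(model(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the feature-label relationship
            drops.append(baseline - np.mean(model(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances

print(permutation_importance(toy_model, X, y))  # feature 0 dominates
```

Reporting such importance scores alongside a system's stated purpose, inputs, outputs, and limitations is one practical way to meet the explainability expectations described above.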
Equitable design is another aspect that should be prioritized in the development of AI systems. The potential for AI to aggravate societal inequalities is one of the major concerns in its development. For instance, some AI systems have predicted that Black people are more likely to reoffend than White people, even though statistics indicate they are roughly half as likely to do so (Gurevich et al., 2023). Such outcomes show that AI systems can reinforce existing stereotypes against minority groups. To prevent them, AI systems should be designed and used in ways that avoid inequalities based on age, race, ethnicity, or sex. Equitable design should cover the choice of data, the development of the data model, and the selection of the processing algorithm. The ultimate goal is for AI systems to demonstrate strong performance with minimal disparities among groups.
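Building on the recidivism example, one concrete equity check is to compare false positive rates across groups, since a group subjected to more false positives bears a disproportionate share of wrongful adverse decisions. The sketch below uses invented numbers purely to show the calculation; it is not the metric used in any specific deployed system.

```python
def false_positive_rate(labels, predictions):
    """Share of truly negative cases the model wrongly flags as positive."""
    negatives = [(l, p) for l, p in zip(labels, predictions) if l == 0]
    if not negatives:
        return 0.0
    return sum(1 for _, p in negatives if p == 1) / len(negatives)

# Invented example data for two groups (1 = predicted/actual adverse outcome).
group_data = {
    "group_a": {"labels": [0, 0, 0, 1, 0], "predictions": [1, 1, 0, 1, 0]},
    "group_b": {"labels": [0, 0, 0, 1, 0], "predictions": [0, 0, 0, 1, 1]},
}

fprs = {g: false_positive_rate(d["labels"], d["predictions"])
        for g, d in group_data.items()}
print(fprs, "disparity:", max(fprs.values()) - min(fprs.values()))
```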
Finally, organizations need to develop a culture of legal and regulatory compliance. As noted above, there is no single standard for developing ethical AI systems; organizations must consider a wide range of guidelines and ethical principles to identify the most relevant ones. Guidelines such as those developed by the OECD have proven popular because they address many of the most pressing ethical issues, including political manipulation. Organizations should therefore base the development of their AI systems on such guidelines to minimize the probability of unethical outcomes. It is also important to prioritize legislation such as the EU GDPR when addressing issues relating to privacy.
Conclusion
The objective of this paper was to explore the key ethical issues of AI and their impact on society. Based on the review of the literature, multiple issues were identified, including transparency and explainability, fairness and bias, non-maleficence and beneficence, and privacy. These issues can result in risks and harms at the individual, group, societal, and corporate levels. Various guidelines and regulations have been proposed to address them; however, apart from the OECD and EU guidelines, most are developed at the national level. Regulation has also been outpaced by AI development, which makes it necessary to consider complementary measures, including the implementation of foundational controls and the establishment of an ethical AI culture. The strategies proposed for developing an ethical AI culture could enable organizations to design and deploy ethical AI systems without relying solely on the complex and sometimes ambiguous guidelines in the literature.
References
Akinrinola, O., Okoye, C. C., Ofodile, O. C., & Ugochukwu, C. E. (2024). Navigating and reviewing ethical dilemmas in AI development: Strategies for transparency, fairness, and accountability. GSC Advanced Research and Reviews, 18(3), 050-058.
Balasubramaniam, N., Kauppinen, M., Hiekkanen, K., & Kujala, S. (2022, March). Transparency and explainability of AI systems: ethical guidelines in practice. In International Working Conference on Requirements Engineering: Foundation for Software Quality (pp. 3-18). Cham: Springer International Publishing.
Carter, D. (2020). Regulation and ethics in artificial intelligence and machine learning technologies: Where are we now? Who is responsible? Can the information professional play a role? Business Information Review, 37(2), 60-68.
Dhuliawala, S., Komeili, M., Xu, J., Raileanu, R., Li, X., Celikyilmaz, A., & Weston, J. (2023). Chain-of-verification reduces hallucination in large language models. arXiv preprint arXiv:2309.11495.
Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021, May). Expanding explainability: Towards social transparency in AI systems. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-19).
Eitel-Porter, R. (2021). Beyond the promise: implementing ethical AI. AI and Ethics, 1(1), 73-80.
Gu, K., Shang, R., Althoff, T., Wang, C., & Drucker, S. M. (2024, May). How Do Analysts Understand and Verify AI-Assisted Data Analyses? In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-22).
Guleria, A., Krishan, K., Sharma, V., & Kanchan, T. (2023). ChatGPT: ethical concerns and challenges in academics and research. The Journal of Infection in Developing Countries, 17(09), 1292-1299.
Gurevich, E., El Hassan, B., & El Morr, C. (2023, March). Equity within AI systems: What can health leaders expect? In Healthcare Management Forum (Vol. 36, No. 2, pp. 119-124). Sage CA: Los Angeles, CA: SAGE Publications.
Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30(1), 99-120.
Heilinger, J. C. (2022). The ethics of AI ethics. A constructive critique. Philosophy & Technology, 35(3), 61.
Hermann, E. (2022). Artificial intelligence and mass personalization of communication content—An ethical and literacy perspective. New Media & Society, 24(5), 1258-1277.
Heyder, T., Passlack, N., & Posegga, O. (2023). Ethical management of human-AI interaction: Theory development review. The Journal of Strategic Information Systems, 32(3), 101772.
Hu, M. (2024). National Security and Federalizing Data Privacy Infrastructure for AI Governance. William & Mary Law School Research Paper, (09-488).
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
Johnson, J. (2022). Delegating strategic decision-making to machines: Dr. Strangelove Redux? Journal of Strategic Studies, 45(3), 439-477.
Karpa, D., Klarl, T., & Rochlitz, M. (2022). Artificial intelligence, surveillance, and big data. In Diginomics Research Perspectives: The Role of Digitalization in Business and Society (pp. 145-172). Cham: Springer International Publishing.
Kempt, H., Heilinger, J. C., & Nagel, S. K. (2023). “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. AI & Society, 38(4), 1407-1414.
Koshiyama, A., Kazim, E., Treleaven, P., Rai, P., Szpruch, L., Pavey, G., ... & Chatterjee, S. (2024). Towards algorithm auditing: managing legal, ethical and technological risks of AI, ML and associated algorithms. Royal Society Open Science, 11(5), 230859.
Larsson, S. (2020). On the governance of artificial intelligence through ethics guidelines. Asian Journal of Law and Society, 7(3), 437-451.
Memarian, B., & Doleck, T. (2023). Fairness, Accountability, Transparency, and Ethics (FATE) in Artificial Intelligence (AI), and higher education: A systematic review. Computers and Education: Artificial Intelligence, 100152.
Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1(11), 501-507.
Neri, E., Coppola, F., Miele, V., Bibbolino, C., & Grassi, R. (2020). Artificial intelligence: Who is responsible for the diagnosis? La radiologia Medica, 125, 517-521.
Prem, E. (2023). From ethical AI frameworks to tools: a review of approaches. AI and Ethics, 3(3), 699-716.
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., ... & Barnes, P. (2020, January). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 conference on fairness, accountability, and transparency (pp. 33-44).
Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491-497.
Richards, D., Vythilingam, R., & Formosa, P. (2023). A principlist-based study of the ethical design and acceptability of artificial social agents. International Journal of Human-Computer Studies, 172, 102980.
Ungerer, L., & Slade, S. (2022). Ethical considerations of artificial intelligence in learning analytics in distance education contexts. In Learning analytics in open and distributed learning: Potential and challenges (pp. 105-120). Singapore: Springer Nature Singapore.
Vemuri, N., & Venigandla, K. (2022). Autonomous DevOps: Integrating RPA, AI, and ML for Self-Optimizing Development Pipelines. Asian Journal of Multidisciplinary Research & Review, 3(2), 214-231.
Weidinger, L., Mellor, J., Rauh, M., Griffin, C., Uesato, J., Huang, P. S., ... & Gabriel, I. (2021). Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
Wong, A. (2020). The laws and regulation of AI and autonomous systems. Unimagined Futures–ICT Opportunities and Challenges, 38-54.
Wu, X. Y., Ding, F., Li, K., Huang, W. C., Zhang, Y., & Zhu, J. (2022). Analysis of the causes of solitary pulmonary nodule misdiagnosed as lung cancer by using artificial intelligence: a retrospective study at a single center. Diagnostics, 12(9), 2218.
Yang, E. (2023). The digital dictator's dilemma. Working paper, University of California San Diego. https://www.eddieyang.net/research/AI_dilemma.pdf
Zembylas, M. (2023). A decolonial approach to AI in higher education teaching and learning: Strategies for undoing the ethics of digital neocolonialism. Learning, Media and Technology, 48(1), 25-37.