Why It Is Important to Train and Educate Employees to Create a Culture of Ethical AI

            The evolution of artificial intelligence (AI) has prompted genuine societal, legal, and ethical concerns over its mainstream deployment and adoption. Among the possible points of contention, AI’s fairness and representation are conspicuous in many discourses. As the technology permeates various sectors, ensuring that it is understandable and transparent is both a technical challenge and a moral imperative. The development and use of AI can cause unintentional harm through flawed data or algorithmic design. Several case scenarios illustrate the value of inclusivity in technology development teams. The research shows that training and educating employees on the ethical necessity of fair models is also essential to building useful ones.

What is a Culture of Ethical AI?

            A culture of ethical AI prioritizes the well-being of everyone by ensuring that AI systems are transparent and designed to minimize harm as much as possible. For instance, AI systems should be fair to mitigate bias and discrimination (Lottu et al., 2024). Developers and organizations must conduct thorough risk assessments to identify potential negative impacts of such models on individuals and society. The dimensions of harm are varied: they can be psychological, social, and even financial (Doya et al., 2022). Proactive measures help mitigate these risks and safeguard people’s well-being. For instance, building a robust and resilient model protects against attacks and can deter exploitative or malicious uses. A good model should be as neutral as possible and devoid of bias and discrimination. Huriye (2023) contends that the ethical implications of biased systems are two-fold: they either intensify existing societal prejudices or result in unfair treatment of individuals. Regular audit and assessment efforts are needed to mitigate such risks, implying that the best solution to the problem is both technical and organizational. The technical aspect falls to developers, who must create models that are impartial and neutral (Ahmed et al., 2023). It is also an organizational issue because effective technical solutions require inclusive inputs from diverse users. A culture of ethical AI prioritizes human values and positive societal impacts in model design and mainstream usage.

Pertinent Employee Training Areas

Legal and Compliance

            Employees can better understand and comply with relevant laws through training and inclusion. For instance, firms in the European Union (EU) should comply with Article 22 of the General Data Protection Regulation (GDPR) (Hill et al., 2023). Article 22 prohibits decisions based solely on automated processing, including profiling. It protects individuals from the potentially harmful effects of automated decision-making and profiling and ensures that proper safeguards are in place when such processing is permitted (Intersoft Consulting, n.d.). Companies in the EU must train their staff to comply with the GDPR, especially the clauses that touch on AI ethics, to minimize the risks of legal disputes, litigation, and reputational damage. Training AI systems requires input data, sometimes from the end users (Meurisch & Mühlhäuser, 2021). The process of collecting this data should be legal and ethical. Cambridge Analytica gathered over 87 million Facebook users' data without explicit consent through Aleksandr Kogan’s personality quiz app "This Is Your Digital Life" (Nicholls, 2018). The app collected data from users who took the quiz and from their Facebook friends, significantly expanding the data pool. Cambridge Analytica then used AI and machine learning algorithms to analyze data collected from unaware users of the social media platform (Nicholls, 2018). Facebook was complicit in the scheme because it allowed third-party developers to access user data via its platform. The input data used in AI systems must be ethically sourced and comply with the laws and statutory policies governing the technology’s use.

Data Governance

            Data governance entails maintaining the integrity and security of information in enterprise systems. AI plays a critical role in data governance because it is efficient at gathering and analyzing large data sets (Gupta, 2024). However, organizations have to be aware of the risks associated with such powerful data processing capabilities. Training employees ensures that they understand the value of high-quality data and its importance in developing reliable AI systems (Zirar et al., 2023). With proper training, individuals can identify and mitigate bias in data. The US criminal justice system implemented the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm to assess the likelihood of defendant recidivism (Hartmann & Wenzelburger, 2021). The algorithm was deservedly criticized for exhibiting racial bias. It was designed to produce recidivism risk scores based on a range of data points about the defendant, such as criminal history, demographic profile, and responses to the issued questionnaire. From the computed scores, black defendants were twice as likely as white defendants to be inaccurately classified as high-risk re-offenders (Lagioia et al., 2023). Conversely, white defendants were more likely to be incorrectly classified as low risk. These biases potentially led to harsher pretrial conditions and sentences for black defendants compared to their white counterparts. The employees of Northpointe, the algorithm's creator, should have been trained on the historical biases embedded in criminal justice data, the context of data collection, and how such biases can influence algorithm outcomes (Lagioia et al., 2023). Employees should be oriented on selecting training data that do not inadvertently perpetuate discrimination.
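            A minimal sketch of the kind of disparity check that trained employees could run before deploying a risk-scoring model is shown below. It assumes only Python and pandas; the column names and toy records are hypothetical and are not drawn from the COMPAS data.

# Sketch: compare false positive rates across demographic groups.
# A "false positive" here is a person flagged as high risk who did not re-offend.
import pandas as pd

def false_positive_rate_by_group(df, group_col, pred_col, outcome_col):
    """Share of non-re-offenders wrongly flagged as high risk, per group."""
    non_reoffenders = df[df[outcome_col] == 0]
    return non_reoffenders.groupby(group_col)[pred_col].mean()

# Toy records (illustrative only): 1 = flagged high risk / re-offended, 0 = not.
records = pd.DataFrame({
    "group":          ["A", "A", "A", "A", "B", "B", "B", "B"],
    "high_risk_pred": [1,   1,   0,   1,   0,   0,   1,   0],
    "reoffended":     [0,   1,   0,   0,   0,   0,   1,   0],
})
print(false_positive_rate_by_group(records, "group", "high_risk_pred", "reoffended"))
# A large gap between the groups is the kind of disparity reported for COMPAS.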

Privacy and Cybersecurity

            Employee security training programs should incorporate advanced threat simulations that mimic potential real-life breaches. A study by Govea (2023) shows the potential advantages of this approach to building cyber-secure models. The experiment enabled trainees to handle a phishing attack in a controlled testing environment. Similarly, the researchers replicated the setup using DALL-E and found that the resulting security breach visualizations helped trainees better acclimatize to the scope and nature of security risks and possible breaches. The Govea (2023) study showed that AI can solve security problems of its own creation. This capability offers a practical and proactive solution to data privacy and cybersecurity risks. Many organizations today are investing in cybersecurity technologies in light of the dangers that artificial intelligence poses (Mohamed, 2023). Through its subsidiary Chronicle (now Google Security Operations), Google has developed cybersecurity tools designed to detect and respond to threats at scale. The tools are integrated into various company products and services to protect users from phishing and malware attacks. They employ real-time threat intelligence to stay ahead of emerging risks and adapt to evolving attack tactics (Google Cloud, n.d.). One of their greatest advantages is scalability, which enables the protection of organizations of all sizes. Microsoft’s Azure Security Center and Defender products use similar technologies to identify and mitigate threats in real time (Microsoft, 2024). To defend against threats, employees must first understand the type of threat to manage and how it occurs.
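            To make the training angle concrete, the sketch below (purely illustrative, not drawn from Govea’s environment or any vendor’s product) shows the kind of simple signal check a simulated phishing exercise might use so that trainees can compare their own judgement with automated flags.

# Sketch: flag common phishing signals in a simulated training email.
# The signal patterns are illustrative, not an exhaustive or production rule set.
import re

PHISHING_SIGNALS = {
    "urgency":         r"\b(urgent|immediately|within 24 hours)\b",
    "credential_ask":  r"\b(password|verify your account|login details)\b",
    "suspicious_link": r"https?://\S*(login|secure|verify)\S*",
}

def score_message(text):
    """Return which phishing signals fire for a simulated message."""
    lowered = text.lower()
    return {name: bool(re.search(pattern, lowered))
            for name, pattern in PHISHING_SIGNALS.items()}

sample = "URGENT: verify your account within 24 hours at http://corp-login.example"
print(score_message(sample))
# Trainees compare their own assessment of the message with the automated flags.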

Equitable Design

            Equitable models should be inclusive and considerate of all employee demographics. According to Lottu et al. (2024), creating a fairly representative AI design begins with allowing broad access to the technology. This approach helps equip employees with critical knowledge for creating fair and accurate models for all user demographics. It is also vital that the information fed into a model be impartial and neutral (Chen, 2023). A heterogeneous development team can incorporate varying inputs and perspectives into the creative process and help build such a model. Development oversight is also essential, as shown by Microsoft’s AETHER committee (Microsoft, 2024). The company established the body to guide the development and use of its AI technologies. It plays an advisory role and can offer practical insights into the AI development process. While establishing such committees is commendable and critical to building equitable designs, their effectiveness and inclusiveness depend significantly on the diversity of their members (Holmes et al., 2022). Companies need to establish policies that promote a culture of equity in technology development processes.

Transparency and Explainability (Interpretability)

            Training also ensures that employees understand and adhere to regulations requiring AI transparency and explainability. Microsoft's InterpretML and Fairlearn open-source toolkits are illustrative examples of the importance of transparency (Microsoft, 2024). InterpretML helps organizations achieve model explainability and can accommodate both glass-box and black-box models, although the toolkits require some advanced knowledge to exploit fully. Glass-box models are more transparent and easier to understand than black-box frameworks because they clearly explain how they arrive at their predictions or decisions (Linardatos et al., 2020). They use interpretable algorithms, such as decision trees and linear regression, to provide explicit explanations of how input data contributes to the output prediction. In contrast, black-box models are more sophisticated because their opaque decision-making processes rely on deep neural networks to establish the correlation between input and output (Hassija et al., 2024). These attributes make such models less transparent in explaining their predictions. Black-box explainers, such as local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP), can still be difficult for most people to interpret (Microsoft, 2024). Glass-box models prioritize transparency and understanding, while black-box models prioritize accuracy and complexity. Nevertheless, both approaches improve the accountability of AI models and are useful tools for creating an inclusive and fair model (Linardatos et al., 2020). Graphical user interfaces can complement these models by showing how an AI model interprets or treats different data demographics, which is useful in quantifying bias. Training on AI transparency can help to build open, interpretable models.
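            A minimal sketch of how these toolkits might be combined in practice is shown below. It assumes the interpret (InterpretML), fairlearn, scikit-learn, and numpy packages are installed; the synthetic data, feature count, and sensitive attribute are hypothetical and only illustrate the workflow of fitting a glass-box model and auditing it across demographic groups.

# Sketch: fit a glass-box model (Explainable Boosting Machine) and audit its
# false positive rate across demographic groups with a Fairlearn MetricFrame.
import numpy as np
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from fairlearn.metrics import MetricFrame, false_positive_rate

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))              # four synthetic numeric features
sensitive = rng.integers(0, 2, size=1000)   # hypothetical binary demographic attribute
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0)

# Glass-box model: per-feature contributions can be inspected directly.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
pred = ebm.predict(X_test)

# Global explanation of what the model learned; interpret's dashboard can render it.
global_explanation = ebm.explain_global()

# Audit: compare false positive rates across the demographic groups.
audit = MetricFrame(metrics=false_positive_rate,
                    y_true=y_test, y_pred=pred,
                    sensitive_features=s_test)
print(audit.by_group)   # a large gap between groups signals potential bias

            In an organizational setting, the explanation object and the per-group metrics would typically be reviewed together, mirroring the point above that transparency tooling and bias quantification complement each other.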

Conclusion

            To create and promote fairness in AI systems, firms should prioritize the input of diverse employee groups in model creation processes. Companies should also adopt ethical frameworks and guidelines that ensure AI systems align with societal values from the outset. Stringent regulations on AI use will increase as more advanced models are created. Given the technology’s iterative evolution, organizations should ensure that their models do not gather excessive data beyond what is needed. Regulatory frameworks such as the GDPR, and the bodies that enforce them, need to be continuously updated to reflect industry changes across the technology’s development cycle. High-risk application fields such as cybersecurity and law enforcement will require rigorous oversight to mitigate the risk factors associated with flawed model output. Therefore, companies should actively pursue ethical development practices that are morally defensible, even in cases of unintended negative consequences.

References

Ahmed, M. I., Spooner, B., Isherwood, J., Lane, M., Orrock, E., & Dennison, A. (2023). A systematic review of the barriers to the implementation of artificial intelligence in healthcare. Cureus, 15(10), 1-14. https://doi.org/10.7759/cureus.46454

Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1-12. https://doi.org/10.1057/s41599-023-02079-x

Doya, K., Ema, A., Kitano, H., Sakagami, M., & Russell, S. (2022). Social impact and governance of AI and neurotechnologies. Neural Networks, 152, 542-554. https://doi.org/10.1016/j.neunet.2022.05.012 

Google Cloud (n.d.). Google security operations-detect. https://cloud.google.com/security/products/security-information-event-management

Govea, J. (2023). Developing a cybersecurity training environment through the integration of OpenAI and AWS. Applied Sciences, 14(2), 679, 1-24. https://doi.org/10.3390/app14020679

Gupta, P. (2024). The role of AI in crafting a modern data governance. International Research Journal of Modernization in Engineering Technology and Science, 6(01), 2582-5208. http://dx.doi.org/10.56726/IRJMETS47962

Intersoft Consulting (n.d.). Art. 22 GDPR: Automated individual decision-making, including profiling. https://gdpr-info.eu/art-22-gdpr/

Hartmann, K. & Wenzelburger, G. (2021). Uncertainty, risk and the use of algorithms in policy decisions: A case study on criminal justice in the USA. Policy Sciences, 54, 269-287. https://doi.org/10.1007/s11077-020-09414-y

Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M., & Hussain, A. (2024). Interpreting black-box models: A review on explainable artificial intelligence. Cognitive Computation, 16, 45-74. https://doi.org/10.1007/s12559-023-10179-8

Hill, E. R., Mitchell, C., Brigden, T., & Hall, A. (2023). Ethical and legal considerations influencing human involvement in the implementation of artificial intelligence in a clinical pathway: A multi-stakeholder perspective. Frontiers in Digital Health, 5, 1-10. https://doi.org/10.3389/fdgth.2023.1139210

Holmes, W., Porayska-Pomsta, K., Holstein, K., Sutherland, E., Baker, T., Shum, S. B., Santos, O. C., Rodrigo, M. T., Cukurova, M., Bittencourt, I. I., & Koedinger, K. R. (2022). Ethics of AI in education: Towards a community-wide framework. International Journal of Artificial Intelligence in Education, 32, 504-526. https://doi.org/10.1007/s40593-021-00239-1

Huriye, A. Z. (2023). The ethics of artificial intelligence: Examining the ethical considerations surrounding the development and use of AI. American Journal of Technology, 2(1), 37-44. https://doi.org/10.58425/ajt.v2i1.142

Linardatos, P., Papastefanopoulos, V., & Kotsiantis, S. (2020). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1), 1-45. https://doi.org/10.3390/e23010018

Lagioia, F., Rovatti, R. & Sartor, G. (2023). Algorithmic fairness through group parities? The case of COMPAS-SAPMOC. AI & Society, 38, 459-478. https://doi.org/10.1007/s00146-022-01441-y

Lottu, O., Jacks, B. & Ajala, O. (2024). Towards a conceptual framework for ethical AI development in IT systems. World Journal of Advanced Research and Reviews, 21(3), 408-415. http://dx.doi.org/10.30574/wjarr.2024.21.3.0735

Meurisch, C. & Mühlhäuser, M. (2021). Data protection in AI services: A survey. ACM Computing Surveys, 54(2), 1-38. https://doi.org/10.1145/3440754

Microsoft (2024). Responsible and trusted AI. https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai

Mohamed, N. (2023). Current trends in AI and ML for cybersecurity: A state-of-the-art survey. Cogent Engineering, 10(2), 1-30. https://doi.org/10.1080/23311916.2023.2272358

Nicholls, S. (2018). The Facebook data leak: What happened and what’s next. Euronews. https://www.euronews.com/business/2018/04/09/the-facebook-data-leak-what-happened-and-what-s-next

Zirar, A., Ali, S. I., & Islam, N. (2023). Worker and workplace Artificial Intelligence (AI) coexistence: Emerging themes and research agenda. Technovation, 124, 1-17. https://doi.org/10.1016/j.technovation.2023.102747
