Should Glass Box AI Be the Preferred Method for Health Care-Related Analysis?
Keywords: Personal Information Protection Law (PIPL); General Data Protection Regulation (GDPR)
Introduction
Artificial intelligence (AI) is the concept and design of computer programs that can perform tasks that previously required human intelligence, such as critical decision-making, medical diagnosis, virtual assistance, fraud detection, and facial recognition. To most people, AI is an incomprehensible and uninterpretable innovation that gathers billions of inputs and produces a response that individuals must accept and apply in various areas of their lives. The pervasiveness of the technology is felt in almost all areas of society, including the construction sector, e-commerce, and the automotive industry. Companies like Tesla have developed self-driving cars that use AI to sense obstacles, make quick decisions, plan routes, and communicate with individuals through natural language processing (NLP). In e-commerce, enterprises have developed personalized product recommendations, virtual shopping assistants, and systems for reviewing customer feedback. Recently, AI has increasingly been adopted in the health industry to help manage the vast data gathered and transform it into meaningful information for accurate decision-making. Doctors are using AI algorithms to conduct diagnoses, prescribe medicines, and learn how to communicate with their patients. In cancer treatment, AI has personalized medication by analyzing many data types, from genetic to environmental factors. Despite these great advancements brought about by AI integration in health, issues related to bias and governance challenges still linger. Bias in AI algorithms and data causes discrepancies and unfairness in service delivery, such as in insurance processing. Governance issues, such as unexplained outputs or convoluted channels for providing feedback, leave patients with a poor experience. Thus, stakeholders in the health sector are advocating for glass box AI, or explainable AI, which offers transparency on how the algorithm works, shifting away from conventional black box AI.
Difference between Glass Box and Black Box AI
Glass box AI is a modern and transparent mode of AI that enables users to comprehend how the algorithm reached a particular decision. It is called a "glass box" because individuals can inspect the algorithm as if looking through glass. Conversely, conventional AI functions like a black box, where the code arrives at decisions through intricate calculations that are often hard to understand and that individuals cannot view. Black box algorithms are characterized by unseen input-output links and an absence of clarity about their internal workings. For example, a model may take client attributes such as income and age as input and output the amount they should pay for life insurance without any discernible indication of how that figure was reached. Examples include deep learning (DL) algorithms that model intricate situations with extreme non-linearity and interactions between inputs. Thus, in black box AI, the link between the data and the system's results is less understandable than in a white box model. Black box AI does not exhibit open steps indicating why the algorithm made a decision or prediction. Sometimes, people may not take issue with the tradeoff between the algorithm's performance and its explainability. For instance, in a health facility, patients may be uninterested in why they were all approved for insurance if none of them were left out. However, they will likely demand an explanation of how the system works when a close family member is denied insurance compensation.
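To make the contrast concrete, the following minimal sketch (in Python, using scikit-learn and synthetic data invented for illustration) fits a shallow decision tree, a glass box whose rules can be printed and read, and a small neural network whose only inspectable artifacts are opaque weight matrices. The income and age features echo the hypothetical insurance example above and are not drawn from any real dataset.

# Minimal sketch contrasting a glass box with a black box model.
# The insurance-style data below is synthetic and purely illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform([20_000, 18], [120_000, 80], size=(500, 2))  # income, age
y = ((X[:, 0] > 60_000) & (X[:, 1] < 65)).astype(int)        # toy approval rule

# Glass box: the fitted tree's decision rules are directly readable.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["income", "age"]))

# Black box: an equivalent neural network offers no such rule trace;
# only its weight matrices are available, which are not human-readable.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)
print(mlp.coefs_[0].shape)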
The shift from a black box to a glass box implies that users understand the parameters involved in models, such as decision trees, and how those models arrive at their conclusions. In an ideal environment, glass box AI models are entirely transparent, but in other cases the algorithm may be explainable only to a limited degree. Such scenarios give rise to translucent glass boxes, where the degree of opacity may range from zero to one hundred percent. The lower the opacity of the translucent glass box, the better the model is understood and, consequently, the greater the public's trust. According to Arrieta et al. (2020), glass box AI is created following two main approaches: formulating transparent models from scratch and wrapping black box models in a layer of explainability. The wholly transparent algorithms are founded on linear regression, rule-based learning, and Bayesian models; they are transparent because their internal logic is directly comprehensible. Those wrapped in a layer of explainability, called post-hoc models, include ones built with neural networks and deep learning. They need further explanation in the form of simplification, textual illustrations, and visual displays.
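The two construction routes can also be sketched in code. The example below is a hedged illustration on synthetic data: it fits a logistic regression whose coefficients are themselves the explanation (transparent from scratch) and then wraps an opaque random forest with permutation importance, one common post-hoc technique standing in for the "layer of explainability". The data and feature count are assumptions made only for this sketch.

# Sketch of the two construction routes on synthetic data. Permutation
# importance stands in for the post-hoc explainability layer; LIME or SHAP
# could be substituted in practice.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Route 1: transparent from scratch -- the coefficients are the explanation.
lr = LogisticRegression().fit(X, y)
print("coefficients:", lr.coef_[0])

# Route 2: post-hoc -- wrap an opaque model and estimate feature influence.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print("permutation importances:", result.importances_mean)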
Explainable glass box AI principles include explanation, meaningfulness, explanation accuracy, and knowledge limits. The first principle, explanation, asserts that models should offer accompanying evidence or reasons for their results and processes. The second principle, meaningfulness, requires algorithms to offer explanations that users can comprehend. The principle of explanation accuracy directs programmers to ensure that an explanation correctly reflects the process that produced the result. Finally, the knowledge limits principle asserts that a system should operate only under the conditions for which it was designed and only when it attains adequate confidence in its output. However, Vale et al. (2022) assert that post-hoc AI algorithms cannot guarantee the insights they produce, implying that they cannot be relied on as the sole approach to ensuring the impartiality of system outcomes in critical industries such as healthcare. This leaves formulating transparent models from scratch as the preferable method for developing glass box AI models in healthcare.
Benefits of Glass-Box AI in Healthcare
White box AI offers many benefits to healthcare, assisting in resolving governance issues. The technology allows hospitals to comply with newly enacted regulations on data transparency, reduce biases, and improve communication. White box AI transparency can also help address security issues in healthcare.
Helps in Complying with Laws and Regulations
Glass box AI can assist healthcare institutions in complying with laws that mandate the explainability of AI algorithms. The EU's GDPR requires health organizations to observe transparency in data processing. In practice, the intricacy of AI systems makes it challenging for firms to abide by this requirement. In the U.S., the HIPAA regulations contain the Openness and Transparency principle, which emphasizes the need for patients to understand what individually identifiable health information exists about them, how it was gathered, and how it is used. In China, Article 7 of the PIPL requires entities handling personal data, including health facilities, to follow the principles of openness and transparency, disclosing the purpose, method, and scope of processing. These transparency regulations call for the adoption of glass box AI in healthcare, where users can see and comprehend the gathered data and the logic behind every decision.
Reduced Bias
Bias is one of the main issues in AI models, arising from both the data and the algorithm. The data used to train AI models may be biased because of flaws in how it was collected. For instance, a dataset used to train AI to forecast the behavior of a vulnerable population may overrepresent a certain gender; in such cases, the model's outcomes will favor the overrepresented group. Developers can also build biased algorithms that make wrong decisions because of assumptions baked into the design. Adopting glass box AI can help users distinguish whether a system's bias is acceptable or harmful and expose the attributes that weigh most heavily in the algorithm's decision-making. Although explainable AI does not detect bias on its own, it allows people to comprehend why the model made a specific choice.
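As a hedged illustration of how a transparent model surfaces such problems, the sketch below fits a logistic regression to hypothetical patient data in which the label leaks a sensitive attribute; the feature names, the data, and the size of the leak are all invented for this example. A disproportionately large weight on the sensitive column is the kind of signal a reviewer could then question.

# Illustrative check of whether a transparent model leans on a sensitive
# attribute. The dataset and column names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
features = ["age", "blood_pressure", "gender"]   # hypothetical attributes
X = rng.normal(size=(600, 3))
# Simulate a biased labelling process that leaks the gender column.
y = (X[:, 0] + 2.0 * X[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X, y)
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")
# A large weight on "gender" flags possible bias in the data or labels;
# the model does not judge whether that bias is acceptable -- reviewers do.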
Crowdsourced testing is another way programmers can eliminate or reduce bias in AI systems. It involves inviting a large number of people, also called the "crowd," physically or virtually, to participate in the model testing process rather than depending solely on an internal team of testers. Crowdsourced testing thus draws on the varied abilities and perspectives of a wide community of testers. This group of testers is often independent and impartial; they can offer constructive feedback on the functionality of the algorithms and the attributes of the datasets, free of any internal biases or preconceived notions regarding the system. Therefore, crowdsourced testing of glass box AI can assist healthcare organizations in identifying bias in datasets and algorithms, promoting fairness in their systems.
Helps Address Governance Challenges in Healthcare
Healthcare governance is marred by transparency, communication, and interpretability challenges, which can be addressed using glass box AI. Transparency is critical in healthcare because decisions made by management on issues such as insurance, and those made by physicians on diagnosis, must be auditable by humans. Glass box AI allows patients and staff to verify the model's decision-making criteria and unearth any glaring errors and biases. Users can interpret the health systems' decisions by tracing them back to the variables and data that informed the choices. Access to comprehensible explanations of AI-based decisions and recommendations enables meaningful interaction between patients and physicians. Patients and their guardians can ask relevant questions regarding their treatment options, enhancing communication with healthcare providers and ensuring everyone is on the same page regarding the medical plan or course of action. This transparency builds trust between patients and management, fostering confidence in AI-driven healthcare. However, studies have also shown that explainable AI can offer misleading information about the reasons behind a decision. Le Merrer and Trédan (2020) assert that providing explanations cannot prevent a remote healthcare system from lying about the real reasons for its choices, undermining the idea of remote transparency. Regardless, trust is critical in healthcare because vital decisions may be made based on the output of the AI model.
Improves System Security
Glass box AI can help solve the security issues affecting governance in healthcare, given the vast amount of personal data collected by AI systems. Cybercriminals launch attacks targeting patients' sensitive information, such as payment details. Scholars have identified input attacks as one of the common cyber-attacks in healthcare (Kiener, 2021). This threat undermines AI systems by manipulating input data, for example by making minor alterations to MR images so that the AI model outputs an incorrect result. In white box AI algorithms, however, physicians can review the reasoning behind the wrong outcome and possibly identify the manipulation. Disparities appear when the model proposes lines of treatment that do not match the patient's underlying condition. The AI model can mistake a correlation between variables for causation, suggesting the wrong medication. In black box AI, it might be impossible for the health provider to notice the disparity, whereas in explainable AI it is easier to detect. Thus, white box AI can help healthcare governance address security issues in service delivery.
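The sketch below gives a hedged, toy version of such an input attack on a transparent linear model: a perturbation just large enough to cross the decision boundary flips the prediction, and because the model's weights are visible, a reviewer can compare per-feature contributions before and after to spot the manipulation. The data, feature count, and perturbation size are assumptions made for illustration only.

# Toy illustration of an input attack on a transparent linear model.
# All data and the perturbation budget are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.5, -1.0, 0.5, 0.0]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
f = model.decision_function([x])[0]
w = model.coef_[0]
print("original prediction:", model.predict([x])[0])

# Step just far enough against the weight vector to cross the boundary.
step = (abs(f) + 0.1) / np.linalg.norm(w)
x_attacked = x - np.sign(f) * step * w / np.linalg.norm(w)
print("attacked prediction:", model.predict([x_attacked])[0])

# Per-feature contribution shift (weight * change) exposes where the
# input was nudged -- information a black box would not reveal.
print("contribution shift:", w * (x_attacked - x))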
Drawbacks of Glass-Box AI in Healthcare
Despite its many benefits, glass box AI comes with various challenges, including loss of accuracy. It can also undermine security and increase the financial burden of implementation due to its complexity. However, these challenges are likely to diminish as programmers continue to address them.
Loss of Accuracy
In certain scenarios, the need to design glass box AI might result in decreased algorithm performance. For instance, programmers may need to simplify parts of the code to make the algorithm explainable, and this simplification might cost the model some predictive precision. The pursuit of explainability may also introduce simpler components into the model to keep it open, leading to a decrease in performance. Linear and rule-based models are often easy to follow but perform poorly compared to more versatile algorithms such as DL. Thus, it might be challenging for health providers to achieve complete explainability in real-world settings because stakeholders consider performance and accuracy more critical in service delivery. Ideally, programmers need to strike a balance between the explainability and accuracy of their systems. Van der Veer et al. (2021) conclude that the public might value the explainability of AI models in healthcare less than in other industries and less than most experts assume, particularly when explainability comes at the cost of accuracy. Therefore, it is critical to actively engage the public when designing healthcare policies on AI explainability.
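A hedged sketch of this trade-off appears below: a shallow, readable decision tree and a more flexible gradient-boosting model are fitted to the same synthetic, deliberately non-linear data, and their test accuracies compared. The data-generating rule and the resulting gap are illustrative assumptions, not measurements from any clinical dataset.

# Sketch of the accuracy/explainability trade-off on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 5))
y = ((np.sin(3 * X[:, 0]) + X[:, 1] * X[:, 2]) > 0).astype(int)  # non-linear rule
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

glass_box = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)  # easy to read
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)         # harder to read

print("shallow tree accuracy:      ", glass_box.score(X_te, y_te))
print("gradient boosting accuracy: ", black_box.score(X_te, y_te))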
Implementation Complexity
Glass box AI is also challenging for health providers to implement, making it more costly than black box AI. Health organizations have limited resources for the systems and infrastructure needed for service delivery, a constraint that forces management to choose options they can afford. The difficulty of implementing glass box AI arises from the models' intricacy and the vast amount of data they handle, which translates into increased development costs. As the algorithms become more intricate, their internal functioning becomes increasingly challenging to decode. Programmers can address the challenge by integrating methods that extract explanations from the algorithm's decision-making process. For example, if an AI system indicates that a medicine would work best for a patient, the decision should be accompanied by an explanation of why that choice was best. The complexity of implementing explainable AI should decrease as research continues and adoption in health institutions increases.
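One hypothetical way to attach such an explanation is sketched below: a helper returns both the recommendation and the per-feature contributions (weight times value) that produced it. The feature names, drug labels, and the explain_recommendation helper are assumptions made for this sketch, not part of any real clinical system.

# Hypothetical sketch of pairing every recommendation with its explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["age", "creatinine", "blood_pressure", "bmi"]  # hypothetical
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
y = (X[:, 1] + 0.8 * X[:, 2] > 0).astype(int)              # 1 = "drug B", 0 = "drug A"
model = LogisticRegression().fit(X, y)

def explain_recommendation(patient):
    """Return the recommended option plus the per-feature contributions."""
    drug = "drug B" if model.predict([patient])[0] == 1 else "drug A"
    contributions = model.coef_[0] * patient               # weight * value
    ranked = sorted(zip(features, contributions), key=lambda p: -abs(p[1]))
    return drug, ranked

drug, reasons = explain_recommendation(X[0])
print("recommended:", drug)
for name, contribution in reasons:
    print(f"  {name:>15}: {contribution:+.2f}")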
Reduced Security
While glass box AI helps healthcare governance enhance safety in service delivery by simplifying the detection and prevention of errors and malicious practices, it also introduces privacy and security issues. Gunning et al. (2019) note that explaining decisions may expose sensitive information or demonstrate how to manipulate the system, for instance through reverse engineering. A wholly transparent model may give a hacker a head start, stressing the need to weigh glass box AI's safety and privacy implications and adopt appropriate risk mitigation steps. Considering privacy is especially essential in the healthcare industry, where the safety of critical data is a vital issue. Integrating the explainability of AI models with privacy-preserving techniques, such as federated learning, can help strike a balance. A study by Vigano and Magazzeni (2020) recommends adopting explainable security (XSec) as an extension of glass box AI for increased safety. XSec has its own distinct and intricate attributes, involves multiple stakeholders, and is adaptable by nature. Thus, white box AI's simplified nature can introduce security loopholes, undermining the very systems it is meant to enhance.
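To indicate what such a combination might look like, the sketch below shows a minimal federated-averaging step under assumed conditions: each hospital fits a simple linear model on its own records, and only the learned weights, never the raw patient data, are sent to a coordinator for weighted averaging. The site sizes, model, and noise level are invented for illustration.

# Minimal federated-averaging sketch; raw patient records never leave a site.
import numpy as np

rng = np.random.default_rng(6)
true_w = np.array([1.0, -2.0, 0.5])   # hidden relationship shared across sites

def local_update(n_patients):
    """Fit an ordinary least-squares model on one site's private data."""
    X = rng.normal(size=(n_patients, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_patients)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n_patients

site_results = [local_update(n) for n in (120, 300, 80)]   # three hospitals

# Coordinator: weighted average of the site models (the FedAvg step).
total = sum(n for _, n in site_results)
global_w = sum(w * n for w, n in site_results) / total
print("federated estimate:", np.round(global_w, 3))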
Real-World Applications of White Box AI
The healthcare industry has implemented various white box AI systems, such as the explainable boosting machine (EBM) and case-based reasoning (CBR). The EBM is a white box AI that uses a tree-based machine learning (ML) algorithm. The process learns each attribute slowly through bagging and boosting, so that one feature is mastered at a time; repeating the process many times generates many small trees for every feature function. EBM can also model pairwise interactions between attributes, enhancing classification accuracy without undermining explainability. A study by Magunia et al. (2021) used EBM to identify ICU outcome predictors in a multicenter COVID-19 cohort, yielding an accuracy of 64%. CBR is another white box AI model; it highlights the aspects most critical to the prediction. The model is also called a lazy learner because it does not build a general model during training; instead, it reasons directly from stored cases. For example, when dealing with a new case for classification, CBR searches through archived cases and selects the ones whose attributes are most similar to the new case. Vásquez-Morales et al. (2019) used CBR alongside a DL algorithm to predict factors associated with chronic renal disease. The researchers noted CBR's decreased performance when dealing with too many archived cases, suggesting the introduction of a feature selection procedure. Thus, white box AI is increasingly being integrated into healthcare AI systems.
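The sketch below gives a hedged, minimal flavour of both approaches. The EBM portion assumes the open-source InterpretML package (installed as "interpret"), which provides an ExplainableBoostingClassifier; the CBR portion uses a plain nearest-neighbour retrieval from scikit-learn as a stand-in for case retrieval. The data is synthetic and the setup is illustrative rather than a reproduction of the cited studies.

# Minimal sketches of EBM (via the assumed InterpretML package) and a
# CBR-style case retrieval, on synthetic data.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from interpret.glassbox import ExplainableBoostingClassifier  # assumed third-party package

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

# EBM: boosted per-feature functions whose contributions can be inspected.
ebm = ExplainableBoostingClassifier().fit(X, y)
global_explanation = ebm.explain_global()   # renderable with interpret's show()

# CBR-style retrieval: answer a new case by fetching the most similar
# archived cases and reporting their known outcomes.
archive = NearestNeighbors(n_neighbors=3).fit(X)
new_case = rng.normal(size=(1, 4))
_, idx = archive.kneighbors(new_case)
print("closest archived cases:", idx[0], "with outcomes", y[idx[0]])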
Conclusion
AI has penetrated the health industry, helping physicians attain precision medicine. AI models have become critical in analyzing patient data for personalized treatment. These advancements in AI in healthcare have prompted governments to formulate stricter laws and regulations to safeguard against cyber-attacks and increase the transparency of collected data. These changes have shifted focus from black box AI to glass box AI, making models explainable to users. The principles of explainable glass box AI include explanation, meaningfulness, explanation accuracy, and knowledge limits. Glass box AI is set to help healthcare facilities comply with newly enacted laws, such as the EU's GDPR, which requires health organizations to observe transparency in data processing. Explainable AI models will also address bias in AI, helping users distinguish whether a system's bias is acceptable or harmful and exposing the attributes that weigh most heavily in the algorithm's decision-making. Moreover, explainable AI will help address critical governance issues such as transparency, communication, and interpretability. However, glass box AI has its share of disadvantages, including loss of accuracy and implementation complexity. The design of glass box AI might come at the cost of decreased algorithm performance and be difficult to implement due to its high complexity. The security of the resulting explainable AI may also be undermined because offering explanations for decisions may expose sensitive information or demonstrate how to manipulate the system. Thus, to reap the full benefits of glass box AI, the healthcare community should address these disadvantages by embedding explainability in medical workflows from the outset rather than as a post-hoc addition.
References
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G. Z. (2019). XAI—Explainable artificial intelligence. Science Robotics, 4(37), eaay7120.
Kiener, M. (2021). Artificial intelligence in medicine and the disclosure of risks. AI & Society, 36(3), 705-713.
Le Merrer, E., & Trédan, G. (2020). Remote explainability faces the bouncer problem. Nature Machine Intelligence, 2(9), 529-539.
Magunia, H., Lederer, S., Verbuecheln, R., Gilot, B. J., Koeppen, M., Haeberle, H. A., ... & Rosenberger, P. (2021). Machine learning identifies ICU outcome predictors in a multicenter COVID-19 cohort. Critical Care, 25, 1-14.
van der Veer, S. N., Riste, L., Cheraghi-Sohi, S., Phipps, D. L., Tully, M. P., Bozentko, K., ... & Peek, N. (2021). Trading off accuracy and explainability in AI decision-making: findings from 2 citizens’ juries. Journal of the American Medical Informatics Association, 28(10), 2128-2138.
Vásquez-Morales, G. R., Martinez-Monterrubio, S. M., Moreno-Ger, P., & Recio-Garcia, J. A. (2019). Explainable prediction of chronic renal disease in the Colombian population using neural networks and case-based reasoning. IEEE Access, 7, 152900-152910.
Viganò, L., & Magazzeni, D. (2020, September). Explainable security. In 2020 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW) (pp. 293-300). IEEE.