
Socio-Cultural Considerations of Ethically Using aDBS Technology

Updated: Mar 31

When weighing the benefits and risks of using aDBS, we need to account for the socio-cultural influences on, and resulting from, the use of aDBS technology.


Biases in AI Models


Given the nature of machine learning, the outputs of an AI model are contingent on how well the sample data represents the real-world distribution of cases. In healthcare, if the data used to train a model does not include information relevant to an underrepresented demographic or vulnerable group, harm can ensue through inaccurate diagnoses or misinterpretation of medical inputs. This has the potential to compound existing harms for vulnerable populations who already endure discrimination and stigmatization in health care1,2.
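
To make the representation problem concrete, here is a minimal sketch using entirely synthetic data: a classifier is trained on a sample in which one group makes up only 5% of cases and has a different feature-to-outcome relationship. The per-group accuracies it prints illustrate how a model can look accurate overall while underperforming badly on the underrepresented group. All group definitions, numbers, and data are illustrative assumptions, not drawn from any real aDBS or patient dataset.

```python
# Illustrative only: synthetic data showing how underrepresentation
# in training data can degrade model accuracy for a minority group.
# No real patient or aDBS data is used; all numbers are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip_sign):
    """Generate features X and labels y; the feature-outcome
    relationship is inverted for the underrepresented group."""
    X = rng.normal(size=(n, 3))
    w = np.array([1.0, -0.5, 0.8]) * (-1 if flip_sign else 1)
    y = (X @ w + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# The majority group dominates the training sample (95% vs 5%).
X_maj, y_maj = make_group(1900, flip_sign=False)
X_min, y_min = make_group(100, flip_sign=True)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh samples from each group: the model has mostly
# learned the majority pattern, so minority accuracy collapses.
for name, flip in [("majority", False), ("minority", True)]:
    X_test, y_test = make_group(1000, flip_sign=flip)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```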


Algorithmic bias has been defined within the scope of healthcare as "the instances when the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability or sexual orientation to amplify them and adversely impact inequities in health systems."1 The challenges associated with algorithmic bias arise from the sample of data collected and the biases of the AI developer that are reflected in the model1.


Transparency and traceability are both proposed solutions for combating algorithmic bias in healthcare AI3. This can be done through intentional checks during the design and implementation processes, in accordance with the respective guiding ethical recommendations and policies3. It further includes diverse population sampling at the development level and research on the effectiveness of such devices across diverse populations3. However, even with the intentional inclusion of representative demographics in samples, there are limited ways to ensure that algorithms are free of bias; doing so would require acquiring the largest possible dataset, an often infeasible task due to financial, time, and data capacity constraints. Still, actively working to ensure that sample data is representative of all members of a population can mitigate some biases and help AI-enabled medical devices have a fair and just impact at the societal level3.
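
As a sketch of what one such intentional check might look like in practice, the hypothetical representation audit below compares the demographic make-up of a training sample against assumed reference population proportions and flags any group that falls well below its expected share. The group labels, population shares, and tolerance threshold are all placeholders for illustration.

```python
# Hypothetical representation audit: flag demographic groups whose
# share of the training sample falls well below their share of the
# target patient population. All figures below are placeholders.
from collections import Counter

# Assumed reference shares of the target patient population.
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def audit_representation(sample_labels, tolerance=0.5):
    """Return groups whose sample share is less than `tolerance`
    times their population share (e.g. under half of expected)."""
    counts = Counter(sample_labels)
    total = sum(counts.values())
    flagged = []
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            flagged.append((group, observed, expected))
    return flagged

# Example: group_c is nearly absent from the training sample.
sample = ["group_a"] * 700 + ["group_b"] * 290 + ["group_c"] * 10
for group, obs, exp in audit_representation(sample):
    print(f"{group}: {obs:.1%} of sample vs {exp:.0%} of population")
```

A check like this cannot prove a dataset is unbiased, but run at the design stage it makes sampling gaps visible and traceable before they are baked into a deployed model.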


Language Limitations on Consent and Use


While aDBS technology has the potential to substantially improve the quality of life for people with neurological conditions, the number of people eligible for its use may be limited by language factors.


Language barriers persist at both the health literacy and communication levels. At the stage of informed consent, healthcare providers need to be able to clearly disclose to a patient the risks and benefits of a treatment plan. This becomes a challenge, however, when the patient speaks a different language than the provider. Linguistic barriers can both compromise a patient's ability to legitimately consent to care and limit a patient's ability to communicate concerns or invaluable information about their health, leading to adverse outcomes4. While translation services can certainly help to bridge this gap, the issue is compounded when complicated medical terms require extensive explanation, and when few people are knowledgeable about the condition, programming, and methodologies related to a medical device, all of which are relevant to aDBS technology.


Patients may also consent to treatment without fully understanding the ramifications of bias in an AI model, or what negative outcomes may result from their treatment if demographic and health data relevant to their unique health profile are not represented in the sample1. To this point, regulations requiring extensive patient education, and potentially patient comprehension testing, could identify and eliminate areas of miscommunication and misunderstanding about what a patient is consenting to1. Further, there is a need to encourage multilingual and cross-cultural collaboration on advanced technologies in the medical, medical device, and AI fields, so that experts with different linguistic backgrounds can accurately communicate with patients about the device and its risks, and articulate clear answers to any relevant concerns.


Cultural Differences in Language Modeling


aDBS technology is currently limited to interpreting electrical neuronal signals within movement-related areas of the brain; however, other brain-computer interface (BCI) devices may be moving toward discerning more complex aspects of brain activity, such as cognitive processes5. While this type of BCI technology has the potential to vastly improve the quality of life of individuals living with severe paralysis, the complexities of cognitive processes are difficult for a model to fully encompass, particularly given the cultural differences in how feelings and emotions are expressed and articulated. If an AI model is trained on cognitive processes exclusive to one culture, it may misinterpret emotions in other contexts or within other cultural groups6. Further, body language and the subtleties of emotional expression can differ extensively across cultures, and misinterpretation of cognitive processes can create further barriers and isolation for a BCI user.


To manage issues on this front, quality control of products before deployment is important to ensure that an array of cultures is accurately represented in the expressions of an AI model6. Additionally, including different perspectives and teams beyond the medical and technical staff at the development stage of AI can uncover gaps in cultural values, interpretations, and traditions that may not have been represented in the algorithm and that would impact the quality of life of potential users6.
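
As one possible shape for such pre-deployment quality control, the hypothetical release gate below computes a model's accuracy separately for each cultural group in a held-out evaluation set and blocks release if any group falls below a minimum or lags too far behind the best-performing group. The group names, thresholds, and example scores are assumptions for illustration, not part of any real review process.

```python
# Hypothetical pre-deployment gate: require per-cultural-group
# accuracy to clear a floor and stay within a disparity bound.
# Group names, thresholds, and scores are illustrative assumptions.

MIN_ACCURACY = 0.80   # floor every group must clear (assumed)
MAX_DISPARITY = 0.05  # max gap allowed vs best group (assumed)

def deployment_gate(per_group_accuracy):
    """Return (ok, reasons); ok is False if any cultural group is
    underserved by the model under the thresholds above."""
    reasons = []
    best = max(per_group_accuracy.values())
    for group, acc in per_group_accuracy.items():
        if acc < MIN_ACCURACY:
            reasons.append(f"{group}: accuracy {acc:.2f} below floor")
        if best - acc > MAX_DISPARITY:
            reasons.append(f"{group}: {best - acc:.2f} behind best group")
    return (not reasons, reasons)

# Example scores from a held-out, culturally stratified test set.
scores = {"culture_a": 0.91, "culture_b": 0.88, "culture_c": 0.74}
ok, reasons = deployment_gate(scores)
if ok:
    print("release approved")
else:
    print("release blocked:")
    for r in reasons:
        print(" -", r)
```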


Ultimately, overcoming these socio-cultural challenges in neuromodulatory AI-enabled medical devices can be supported by constructive policy that drives review processes and actively evolves to prioritize the social and cultural elements relevant to all members of our collective society. Further, continuous evaluation of socio-cultural factors in healthcare and during the AI development stages can help AI creation processes evolve to be inclusive and supportive of cross-cultural health improvements.


Citations:

  1. Panch, Trishan, Heather Mattie, and Rifat Atun. 2019. "Artificial Intelligence and Algorithmic Bias: Implications for Health Systems." Journal of Global Health 9 (2): 020318. https://doi.org/10.7189/jogh.09.020318.

  2. Maccaro, Alessia, Katy Stokes, Laura Statham, Lucas He, Arthur Williams, Leandro Pecchia, and Davide Piaggio. 2024. "Clearing the Fog: A Scoping Literature Review on the Ethical Issues Surrounding Artificial Intelligence-Based Medical Devices." Journal of Personalized Medicine 14 (5): 443. https://doi.org/10.3390/jpm14050443.

  3. Zhang, Jie, and Zong-ming Zhang. 2023. "Ethics and Governance of Trustworthy Medical Artificial Intelligence." BMC Medical Informatics and Decision Making 23 (7). https://doi.org/10.1186/s12911-023-02103-9.

  4. Claros, Edith. 2018. "Impact of Language Barriers on Patient Safety." RN Journal, March 2018. https://rn-journal.com/journal-of-nursing/impact-of-language-barriers-on-patient-safety.

  5. Drew, Liam. 2024. "Elon Musk’s Neuralink Brain Chip: What Scientists Think of First Human Trial." Nature, February 2, 2024. https://doi.org/10.1038/d41586-024-00304-4.

  6. Admin. 2024. "Emotional Blind Spots of Gen AI." Digital Health Insights, September 10, 2024. https://dhinsights.org/news/emotional-blind-spots-of-gen-ai.



