Negative effects of using technology in mental healthcare


Emerging technologies for assessing, monitoring, and treating mental health conditions are advancing rapidly, with the potential to broaden the range of available treatment options and improve the health and well-being of individuals and communities.

These technologies include AI-powered chatbots, wearables, smartphone apps, predictive analytics tools, and immersive technologies such as virtual reality (VR).

Despite the benefits, there are concerns about mental health technologies, including reduced face-to-face contact; the effectiveness, quality, and safety of care; the exacerbation of health inequalities; and data privacy and security.

This post will look at some of the negative consequences and issues raised by the potential use of emerging technologies in mental healthcare.

Human connections

Technology can foster connection by bringing together service users who may be geographically distant but share similar experiences and needs. This may be especially important for people with rare conditions who would not otherwise have the opportunity to meet.

As mental healthcare becomes more contactless and automated, it is critical to consider the impact that lack of human contact may have on those seeking treatment. In-person therapy, community participation, peer and group support, and other activities involving face-to-face interactions can improve clinical outcomes for many service users. Initial research into care experiences during COVID-19 has revealed that, while some people adapted quickly and appreciated the flexibility provided by remote care, others experienced heightened feelings of loneliness, isolation, disconnection from communities, and an overall deterioration of mental health.

Therapeutic relationship

Some service users may struggle to build trust with health professionals without face-to-face contact. Because therapeutic relationships play an important role in recovery, this could affect clinical outcomes. There have been calls for more research on the effects of cultivating therapeutic relationships with non-human agents. Despite recent advances in affective computing, automated systems are still far from comprehending an individual’s subjective experience of mental illness, and it is unclear whether machines can fully replicate the complexity of human emotions and interactions.

Effectiveness and safety

Interventions delivered through digital technology have the potential to be extremely effective. Individuals suffering from phobias, for example, may find confronting their fears in virtual environments easier than in real ones, making VR treatments appealing and, for some, highly effective.

Some people have expressed concern about the lack of evidence supporting certain technologies. Despite having millions of users, most commercially developed apps have not been subjected to rigorous scientific testing; when they have been, the samples are often small, with no follow-up. Furthermore, studies are sometimes conducted by the developers of the apps rather than by independent research teams. As a result, it is not always clear whether these tools are effective or potentially harmful. There have been calls for more research into the effectiveness and safety of mental health apps and for more stringent regulatory frameworks. Wearable neurotechnologies have sparked similar concerns: developers’ claims of efficacy are frequently based on the efficacy of the treatment on which a given product is based, such as transcranial direct current stimulation (tDCS), rather than on the product itself.

Accuracy

Concerns have been raised about the accuracy of diagnostic and prediction tools in clinical decision-making, especially if they are used without clinician involvement. It is also critical to define what accuracy means: in some research studies, it is measured by how closely the tool’s output matches clinicians’ determinations.

Concerns about accuracy arise in nonclinical settings as well. Some have warned against using automated social media content analysis in decision-making in areas such as law enforcement, citing the low accuracy commonly achieved by natural language processing (NLP) tools.

Some ethnic and age groups are underrepresented in data sets on mental health. Technologies that rely on biased data sets will not have the same accuracy or predictive validity for those groups, potentially exacerbating inequalities in mental health care.
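To make this concrete, here is a minimal, hypothetical sketch in Python of how a tool’s agreement with clinicians’ determinations might be reported overall and per demographic group. All labels and group names below are invented for illustration; a markedly lower score for one group would be one signal of the kind of bias described above.

```python
# Hypothetical illustration: a screening tool's agreement with clinician
# determinations, overall and per demographic group. All data is invented.
from collections import defaultdict

# (clinician_label, tool_label, group) — 1 = flagged, 0 = not flagged
records = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 1, "group_a"),
    (1, 0, "group_b"), (0, 0, "group_b"), (1, 1, "group_b"), (0, 1, "group_b"),
]

def agreement(pairs):
    """Fraction of cases where the tool matches the clinician's label."""
    return sum(c == t for c, t in pairs) / len(pairs)

overall = agreement([(c, t) for c, t, _ in records])

by_group = defaultdict(list)
for c, t, g in records:
    by_group[g].append((c, t))

print(f"overall agreement: {overall:.2f}")
for group, pairs in by_group.items():
    # A much lower score for one group suggests the tool generalizes poorly
    # to populations underrepresented in its training data.
    print(f"{group}: {agreement(pairs):.2f} (n={len(pairs)})")
```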

Access to care

Certain population groups have unequal access to mental health services. Children and young people, people from ethnic minorities, homeless people, older adults, refugees, and the poor are among those affected. Technology may improve access to care and assist in reaching some underserved populations. For example, virtual support may encourage people who would not otherwise seek help due to the stigma associated with mental illness to do so. Others may feel less judged or embarrassed when they reveal symptoms to virtual agents.

As the COVID-19 pandemic demonstrated, increased reliance on technology can exacerbate inequalities by excluding individuals and communities who have difficulty using or accessing technology or those for whom home is not a private or safe place, such as victims of domestic violence. Health and digital literacy, socioeconomic status, age, ethnicity, and level of education are all factors that influence access to and use of technology. Significantly, many people with mental health issues do not have access to or are hesitant to use technology, putting them at a disadvantage.

As mental healthcare systems rely more on technology, questions arise about which technological interventions developers prioritize and why. Mental health technologies tend to focus on the most common conditions, such as mild anxiety and depression, which could result in more technological interventions for common mental health conditions than for rarer ones.

Individual responsibility for health

Individual responsibility for mental well-being may increase as mental health technologies become more widely available and used. By increasing access to mental health information and encouraging self-reflection and self-care, technology could empower people to take responsibility for their health. People must, however, have access to technology and certain levels of health and digital literacy to be empowered. It is also necessary to consider the potential impact of an increased medicalization of daily life: people may become overly preoccupied with ordinary changes in mood and behavior, causing anxiety, or may worry that these changes could be misinterpreted as symptoms of illness.

Data privacy and security

While some argue that all types of health data are equally sensitive, others argue that mental health data is especially sensitive. When personal information is collected and used in mental healthcare, it is critical to have clear data and privacy policies in place. As mental health technologies become more prevalent in both healthcare and non-healthcare settings, there are concerns that information about a person’s mental health could be used to justify unnecessary coercive interventions or for commercial purposes the person did not intend.

Some have predicted an increase in cyberattacks on mental health service providers. Recent high-profile cyberattacks on service providers have shown that data breaches involving sensitive mental health information can have disastrous consequences for users and providers. Advocates are calling for higher security standards to protect users and assist victims of attacks, as well as for more research into the implications of mental health data breaches.

The importance of choice

By definition, emerging technologies are full of promise and are frequently accompanied by an optimism bias. However, it is critical to recognize that technology is not always a good or better solution for everyone and that different people will have different needs.

Concerns have been raised that focusing on technology solutions may divert resources from other important interventions, such as increasing social interactions or addressing the social determinants of poor mental health. This may affect the quality of support, because mental well-being depends on several interconnected factors, such as social connectedness, housing, employment, and education. If technological forms of mental health support become widely available, service users may be concerned about being left without a choice, especially if their care experience has already been marked by a lack of autonomy and choice. These technologies should supplement rather than replace what is already available in clinical settings, and alternatives to technological interventions, including hybrid forms of support, should always be available.

Trust and acceptability

Some mental health technologies require a high level of monitoring. This could be perceived as overly intrusive, undermining trust in mental healthcare and the organizations that use digital technologies, with consequences for their acceptance and uptake. Remote monitoring, if used incorrectly, may increase mental distress and anxiety symptoms in people with mental health problems, harm the relationship between service users and clinicians, and violate the fundamental human right to privacy.

There are also concerns about whether people can always provide informed consent to mental health monitoring and support tools, particularly those delivered directly to consumers. Many people may be unfamiliar with new technologies; users may, for example, consent to specific forms of surveillance and data collection without fully comprehending the implications of their actions.