How AI and Robotics Are Improving the Field of Mental Health

The world was already going through a mental health crisis before COVID-19 hit. It was reported that approximately 15 percent of people worldwide were suffering from some form of mental illness. This figure rises to 1 in 5 adults in the United States. Exacerbated by the pandemic, now more than 4 in 10 adults in the United States have reported symptoms of depression or anxiety.

A study by Workplace Intelligence and Oracle revealed that 70 percent of respondents found 2020 more stressful than previous years, resulting in more anxiety at work. The increased stress adversely affected the mental health of 78 percent of the global workforce, leading to poorer work-life balance, burnout and depression.

Unfortunately, there is a shortage of qualified professionals to provide mental health treatment, with 40 percent of Americans lacking access to a mental health professional. The costs associated with treatment and the stigma surrounding mental illness make timely diagnosis and treatment even harder to provide.

Enter artificial intelligence (AI), machine learning (ML) and robots. Many initiatives are utilizing these technological avenues to help address the lack of resources for mental health treatment, as they are already doing for other sectors in the health care industry.

Detection of Mental Illnesses Using AI and ML

Researchers are working on developing ML algorithms that will be able to notify doctors when a patient is at risk of developing a mental health issue based on existing medical records. One study has already succeeded in predicting which patients admitted to a hospital for self-injury are likely to attempt suicide in the future. Similarly, AI has been found to be effective at identifying signs of depression and post-traumatic stress disorder by analyzing speech patterns and facial expressions.

Researchers from the World Well-Being Project (WWBP) analyzed social media with an AI algorithm to pick out linguistic cues that might predict depression. The team analyzed half a million Facebook posts from people who consented to provide access to their Facebook status updates and medical records. The program was able to identify depression-associated language markers, which indicated that people suffering from depression express themselves on social media in different ways, such as mentioning loneliness and using words such as “feelings,” “I” and “me.” According to the team’s research, linguistic markers could predict depression up to three months before a person receives a formal diagnosis.
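To make the idea of linguistic markers concrete, here is a minimal toy sketch, not the WWBP model: it simply scores a post by how often words from a small, hypothetical marker list (first-person pronouns and loneliness-related terms, as mentioned above) appear. Real systems use far richer features and validated clinical labels.

```python
# Toy illustration only (not the WWBP algorithm): rate a post by the fraction
# of its words that appear in a small, hypothetical depression-marker list.
MARKERS = {"i", "me", "my", "feelings", "alone", "lonely", "loneliness"}

def marker_rate(post: str) -> float:
    """Return the fraction of words in the post that match the marker list."""
    words = [w.strip(".,!?\"'").lower() for w in post.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in MARKERS)
    return hits / len(words)

posts = [
    "Great hike with friends today, the weather was perfect!",
    "I feel so alone lately, nobody understands my feelings.",
]
for p in posts:
    print(f"{marker_rate(p):.2f}  {p}")
```

A production system would learn which words matter (and how much) from labeled data rather than using a fixed list, but the intuition is the same: shifts in everyday word choice can carry a measurable signal.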

The Massachusetts Institute of Technology (MIT) conducted a similar exercise and developed a neural network model that could detect depression in speech patterns from recorded text and audio of conversations. Other scientists have explored how facial expressions, acoustic features such as enunciation and tone, and language could indicate suicide risk.

Companies Using AI to Tackle Mental Illnesses

In addition to researchers, there are many companies in the market working with AI in the mental health sector. Quartet developed a platform that flags possible mental conditions based on a patient’s physical condition. The platform can refer patients to a provider or a computerized cognitive behavioral therapy program. Lyra Health has a similar setup to Quartet in using data to connect patients with mental health providers, therapies and coaching programs.

Ginger created a chat application used by employers that provides direct counseling services to their employees. It analyzes speech patterns and uses its ML training from more than 2 billion behavioral data samples, 45 million chat messages and 2 million clinical assessments to provide a recommendation. meQuilibrium is another start-up that targets workplace mental health by using clinically validated employee engagement and behavioral change techniques. The company claims that using the platform can result in a 35 percent reduction in employee cynicism about work and that it can make employees half as likely to quit.

Lyra Health’s platform provides access to therapists and self-care tools. (Image courtesy of Lyra Health.)

The CompanionMX system has an app that allows patients being treated for depression, bipolar disorder and other conditions to create an audio log where they can talk about how they are feeling. The AI system analyzes the recordings and looks for changes in behavior to provide proactive mental health monitoring.

Woebot Labs, founded by clinical psychologists from Stanford University, has built an AI-powered chatbot that helps people process difficult situations with the help of systematic guidance, tracks users’ moods, and provides lessons, exercises and stories from the start-up’s clinical team. Clinical trials show that these techniques can help people with anxiety and depression not only cope with their conditions but also get better over time.

Woebot’s efficacy is based on self-administered diagnostic questionnaire scores. (Image courtesy of Woebot Labs.)

Childhood mental health is another critical area that is not given the attention it needs, especially considering children’s increasing ease of access to the Internet. Some companies are looking to address that. Bark has made a parental control phone tracker app using ML that monitors major messaging and social media platforms on a child’s phone to detect signs of cyberbullying, depression, suicidal thoughts and sexting. Cognoa created a developmental assessment in partnership with Harvard and Stanford for children aged one to eight. Clinical studies show the diagnostic tool is significantly more accurate than standard screeners working with the same age range.

Socially Assistive Robots (SARs)

Beyond diagnosis of illnesses through AI, a study by the Australian Centre for Robotic Vision and the Queensland University of Technology found that socially assistive robots (SARs) have the potential to help people deal with disorders such as depression, drug and alcohol abuse, and eating disorders. While still in the nascent stages, there are already a few SARs that are being used to provide social services.

NAO. (Image courtesy of SoftBank Robotics.)

Arguably, the most famous of them is the fully programmable humanoid robot called NAO, which has multiple sensors for touch, sound, speech and visual recognition. It interacts with users through speech recognition and an onboard audio system. NAO is also capable of movement, including fall detection and recovery. Among its myriad uses, NAO has been used in research with children who have developmental disorders or disabilities, as well as for companionship.

The PARO therapeutic robot. (Image courtesy of Paro Robots.)

Paro is a robotic harp seal that comes equipped with five kinds of sensors (tactile, light, audio, temperature and posture) with which it can perceive people and its environment. Paro is used for companionship and stress reduction and aims to provide comfort similar to that found through animal therapy. (The seal was even featured on Netflix’s Master of None.)

Buddy, winner of the 2018 Best of Innovation Award, interacts with its owners on an emotional level. In addition to providing social interaction and care for the elderly, it acts as a personal assistant, smart home hub and home security robot.

Buddy The Emotional Robot. (Image courtesy of Buddy The Robot.)

Advantages of Using Technology for Mental Health Crises

The utilization of AI can provide a variety of benefits. The technology can be used to analyze data to search for trends, suggest possible treatments and monitor a patient’s progress, reducing the workload on already overburdened professionals. It can also provide round-the-clock access to care that mitigates the effects of illnesses, especially when professionals are difficult to reach. These tools are also inexpensive compared to standard treatment methods, making cost far less of a barrier.

Patients might also be more comfortable opening up to an impersonal program than to a human physician, leading to more accurate diagnoses and treatment. According to the Oracle study, 68 percent of people said they would prefer talking to a robot, and 64 percent felt that robots and chatbots would offer a judgment-free zone.

From the perspective of treatment providers, AI can more efficiently direct professionals. It has already been shown that AI is successful at detecting signs of conditions like depression and post-traumatic stress disorder by analyzing speech patterns and facial expressions. Doctors can use these tools during patient meetings as a backup, since they are usually busy with many appointments and might not notice when patients exhibit subtle signs of trouble. An AI tool can provide an additional level of diagnosis to ensure that a physician is made aware of deeper issues that might not be apparent at first glance. Additionally, it can safeguard against human error or inherent bias on the part of the health care professional.

Many workplaces are beginning to understand the importance of their employees’ mental health as well. For example, Starbucks announced a partnership last year with Lyra to provide mental health care benefits to its employees. All its partners and eligible family members will have free access to 20 sessions a year with a mental health therapist.

In fact, the Oracle study reports that AI has already proved beneficial in improving the mental health of 75 percent of the global workforce. This has allowed workers to increase productivity, shorten their workweek, take much-needed time off and improve their job satisfaction and overall well-being.

Psychiatrists are on board as well. Sermo, a global social platform for physicians, conducted research into the impact of AI and ML on the mental health field. Only 4 percent of those surveyed believed that AI could replace human physicians. Conversely, 75 percent believed that it could assist in maintaining patient documentation, while 54 percent agreed that it could provide support in more accurately diagnosing patients.

Concerns About Technology in the Mental Health Field

Despite all its advantages, technology still has its limitations. Researchers at the Technical University of Munich (TUM) considered the ethical implications of using robot therapists and found that people may be more easily manipulated by robots than their fellow humans.

An outcome of corporations providing mental health services to their employees and customers is the storage of and access to mental health data. Not only does this increase the security risk of private information being leaked or hacked, it also raises overarching ethical issues, including harm prevention and various questions of data ethics. If corporations have access to an employee’s data, will that in any way negatively impact the employee’s future career growth? Will the company sell customers’ private data to third parties without permission? There is a lack of guidance in this regard on the development of AI applications.

Further challenges in the application of embodied AI include matters of risk assessment, referrals and supervision; the need to respect and protect patient autonomy; the role of nonhuman therapy; transparency in the use of algorithms; and specific concerns regarding the long-term effects of these applications on understanding illness and the human condition.

Empathy is also considered a necessary quality in the mental health field, and it is difficult to replicate, as it currently cannot be fully automated or programmed. This raises a further ethical question: could a robot that has developed analytical, learning and communication abilities through ML ever be considered sentient?

What Does the Future Hold?

As technology in the field becomes ubiquitous, regulations that address ethical concerns need to stay ahead of the curve, given the pace of technological advancements. There is certainly a legal and ethical framework that needs to be developed, within which all such technologies should operate.

Regardless, AI and ML are here to stay and will only get better with time. While they are not meant to replace human health care professionals, they do provide an extra layer of screening and support. This is helping to reduce the stigma associated with mental health and allowing unprecedented access to the treatment patients need—and it will be exciting to see what the future has in store.

What do you think about the progress being made in this field? Do you have concerns about the ethics behind sharing private health data with corporations? What improvements can be made to ensure a secure environment for patients?