
Data Talking to Machines: The Intersection of Deep Phenotyping and Artificial Intelligence

By Carmel Shachar

As digital phenotyping technology is developed and deployed, clinical teams will need to carefully consider when it is appropriate to leverage artificial intelligence or machine learning, versus when a more human touch is needed.

Digital phenotyping seeks to use the rivers of data we generate to better diagnose and treat medical conditions, especially mental health conditions such as bipolar disorder and schizophrenia. The amount of data potentially available, however, is at once digital phenotyping’s greatest strength and a significant challenge.

For example, the average smartphone user spends 2.25 hours a day using the 60 to 90 apps installed on their phone. Setting aside all other data streams, such as medical scans, how should clinicians sort through the data generated by smartphone use to arrive at something meaningful? When each patient or research subject generates this quantity of data, how does the care team ensure it does not miss important predictors of health?

An answer may be that clinical teams working on digital phenotyping will have to decide when and how to use machine learning and artificial intelligence. AI thrives on large data sets. In a review of digital phenotyping of severe mental illness published in the Harvard Review of Psychiatry, sixteen of the fifty-one included studies used machine learning. Most of these studies used a random forest approach, which combines many small, weak decisions into one stronger, central prediction. This suggests that machine learning can harmonize data streams that are not strongly predictive on their own, but that together can support successful digital phenotyping.
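To make the random forest idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn: several simulated, individually weak data streams (screen time, typing speed, mobility, sleep) are combined by an ensemble of shallow decision trees into a single prediction. The feature names, data, and labels are invented for illustration and are not drawn from the studies in the review.

```python
# Illustrative sketch only: a random forest combining weak, noisy signals
# (screen time, typing speed, mobility, sleep) into one prediction.
# All features and labels are simulated, not real patient data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Simulated, weakly informative data streams for n hypothetical users.
X = np.column_stack([
    rng.normal(2.25, 0.8, n),   # daily screen hours
    rng.normal(40, 10, n),      # typing speed (words per minute)
    rng.normal(5, 2, n),        # distinct locations visited per day
    rng.normal(7, 1.5, n),      # hours of sleep
])
# Synthetic outcome: each stream contributes only a little signal plus noise.
risk = 0.3*X[:, 0] - 0.02*X[:, 1] - 0.1*X[:, 2] - 0.2*X[:, 3] + rng.normal(0, 1, n)
y = (risk > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Many shallow, individually weak trees voting together.
model = RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The point of the sketch is not the accuracy number, but that no single stream needs to be diagnostic on its own; the ensemble aggregates many weak votes into a usable prediction.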

For example, smartphone psychotherapy chatbots could monitor a user’s typing speed and accuracy to determine when they might be experiencing distress, and then reach out to deliver support or even therapy. But what is lost when human clinicians are no longer intimately involved in diagnosis and treatment? Would a human clinician better understand the nuances of the data gathered on the user? Be more effective at encouraging the user to pursue treatment? For some of these questions, more research is needed before we can attempt an answer.
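As a rough illustration of how such a chatbot might flag distress, the hypothetical sketch below compares a user’s recent typing speed and error rate against their own baseline and prompts a check-in when both deviate sharply. The thresholds, field names, and message are assumptions made for illustration, not any real product’s design.

```python
# Hypothetical chatbot logic: flag a supportive check-in when typing slows
# and errors rise relative to the user's own baseline. Thresholds are invented.
from dataclasses import dataclass

@dataclass
class TypingSample:
    words_per_minute: float
    error_rate: float  # fraction of backspaced/corrected characters

def needs_check_in(baseline: TypingSample, recent: TypingSample,
                   speed_drop: float = 0.30, error_rise: float = 0.50) -> bool:
    """Flag a check-in if typing speed falls and errors rise versus baseline."""
    slower = recent.words_per_minute < baseline.words_per_minute * (1 - speed_drop)
    sloppier = recent.error_rate > baseline.error_rate * (1 + error_rise)
    return slower and sloppier

# Example: a user who normally types 45 wpm with 5% corrections.
baseline = TypingSample(words_per_minute=45, error_rate=0.05)
recent = TypingSample(words_per_minute=28, error_rate=0.09)
if needs_check_in(baseline, recent):
    print("Chatbot: 'It looks like today might be rough. Want to talk?'")
```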

Clinicians may also worry about over-relying on artificial intelligence to process digital phenotyping data because of the “black box” nature of machine learning algorithms. This term refers to the inability of humans to understand how these programs arrive at their outputs.

Without the ability to “check the work” of AI, clinicians also lose the ability to course correct. We know that commercial algorithms used in medicine often inadvertently reflect racial and other biases present in our society. If clinicians do not understand the algorithms used in their digital phenotyping efforts, they run the risk of further enmeshing these biases in medicine. It is important that clinicians think through their ability to oversee any AI and machine learning used in their digital phenotyping efforts.

Perhaps the most significant concern for users will be data privacy. HIPAA protects medical data collected by physicians and other clinicians. The smartphone chatbot described above, however, does not involve any traditional medical providers, and therefore is not bound by HIPAA. This creates an uneven playing field where medical providers are limited in their use of data, while non-traditional providers and technology companies are not.

While most digital phenotyping occurs in medical settings at this point, Facebook has announced an algorithm that monitors posts to identify users exhibiting signs of suicidal thoughts and alerts a Facebook team. Other technology companies are likely to use digital phenotyping as well to better understand their users’ mental health and moods. To protect user privacy, we should consider a broader data protection regulatory scheme, such as the European Union’s General Data Protection Regulation (GDPR), to ensure that we do not privilege data in the clinic while leaving it unguarded on our phones.

AI and machine learning can be helpful tools in expanding digital phenotyping. In turn, they may help improve the quality of our care, especially in the context of mental health. Despite the promise of these tools, clinicians pioneering the use of digital phenotyping should think carefully about the intersection of their machine and human resources. Further, regulators and policymakers should consider the blind spots in our data protection schemes and work to address them.


This post is part of our Ethical, Legal, and Social Implications of Deep Phenotyping symposium. All contributions to the symposium are available here.

Carmel Shachar

Carmel Shachar, JD, MPH, is Assistant Clinical Professor of Law and Faculty Director of the Health Law and Policy Clinic at the Center for Health Law and Policy Innovation of Harvard Law School (CHLPI). Previously, Shachar was the Executive Director of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

