Presentation Title

Data and Ethics: On the Ethics of Machine Learning for Suicide Prediction

Faculty Mentor

Dr. Alex Madva

Start Date

23-11-2019 11:00 AM

End Date

23-11-2019 11:15 AM

Location

Markstein 306

Session

Oral 2

Type of Presentation

Oral Talk

Subject Area

behavioral_social_sciences

Abstract

Although the rate of suicide has risen every year since 2006, clinicians' ability to predict suicidal behavior has remained at near-chance levels. Recently, new methods of assessing suicide risk that utilize machine learning have emerged and are considerably more accurate than traditional approaches. New machine-learning approaches, particularly neural networks, have the potential to radically shift how suicidality is assessed and treated by mental health clinicians. However, the potential adoption of these algorithms raises important ethical questions about both their design and implementation, and to date insufficient attention has been given to identifying these questions, determining how they can be answered, and exploring what those answers may be. Therefore, to enable additional work on these fronts, this paper aims to fulfill four functions: 1) to review existing empirical research on machine learning and suicide prediction; 2) to review existing ethical literature on this topic; 3) to apply clinical ethics, including a proposal for how its principles can guide a deeper analysis of the topic; and 4) to outline heretofore unidentified ethical concerns in this area. This paper’s primary objective is to formulate a practical and concise foundation for the discourse surrounding the ethics of machine learning for suicide prediction, allowing future thinkers to better address this complex intersection of data and ethics.

This paper will address the potential for type I and type II errors, how clinicians may engage with and act on this technology, how conceptual discussions may be framed improperly, how machine learning could compound existing flaws in the field of suicide prediction, bias within machine-learning systems, and more. The aim is to provide an engaging text that fosters the development of relevant questions in order to advance the presently limited moral discourse surrounding this forthcoming technology.

