BINF G4008: Ethics and Fairness in Digital Health
Course Description: In this discussion-based class, students will learn about different aspects of ethics and fairness as they relate to healthcare data science. As a brief overview, we'll cover (1) fairness [first briefly from a general CS perspective, then from the healthcare perspective], (2) algorithmic approaches that can improve fairness in healthcare, (3) informed consent in the use of biomedical data, (4) the notion of privacy, (5) model explainability and transparency in AI models, and (6) regulations [intellectual property, FDA] pertaining to AI ethics and fairness in healthcare. The final project will be a critique of an existing implementation of an ethics/fairness setup.
Classes will meet Mondays from 2:00 to 4:00 PM in HSC (Hammer Health Sciences Library, Room 305).
Course Requirements and Grading: Attendance is mandatory. Grading is based on class participation (20%), weekly write-ups (40%), and a semester-long project (40%).
Schedule and Readings: See attached schedule below.
Academic Integrity: Columbia’s intellectual community relies on academic integrity and responsibility as the cornerstone of its work. Graduate students are expected to exhibit the highest level of personal and academic honesty as they engage in scholarly discourse and research. In practical terms, you must be responsible for the full and accurate attribution of the ideas of others in all of your research papers and projects; you must be honest when taking your examinations; you must always submit your own work and not that of another student, scholar, or internet source. Graduate students are responsible for knowing and correctly utilizing referencing and bibliographical guidelines. When in doubt, consult your professor. Citation and plagiarism-prevention resources can be found at the GSAS page on Academic Integrity and Responsible Conduct of Research (http://gsas.columbia.edu/academic-integrity).
Failure to observe these rules of conduct will have serious academic consequences, up to and including dismissal from the university. If a faculty member suspects a breach of academic honesty, appropriate investigative and disciplinary action will be taken following Dean’s Discipline procedures (http://gsas.columbia.edu/content/disciplinary-procedures).
Copying or paraphrasing someone’s work (code included), or permitting your own work to be copied or paraphrased, even if only in part, is not allowed, and will result in an automatic grade of 0 for the entire assignment or exam in which the copying or paraphrasing was done. Your grade should reflect your own work. If you believe you are going to have trouble completing an assignment, please talk to the instructor or TA in advance of the due date.
Resources (to be updated as the semester continues):
Pilar Ossorio, "Justice in MLHC," MLHC 2019.
| Date | Topic | Readings and Speakers |
|------|-------|------------------------|
| 2/3/2020 | Fairness outside of health | Dunkelau and Leuschel, "Fairness-Aware Machine Learning: An Extensive Overview" (2019). Available online. |
| 2/10/2020 | Fairness in healthcare | https://www.fairmlforhealth.com/; "Dissecting racial bias in an algorithm used to manage the health of populations," Science (2019), available here (Fair ML keynote talk and slides available here); "Addressing fairness in prediction models by improving subpopulation calibration," Fair ML @ NeurIPS talk with slides. |
| 2/17/2020 | Fairness | "Algorithms on regulatory lockdown in medicine," Science, available here; Ranganath et al.'s review of challenges and opportunities in machine learning in health, available on arXiv here. |
| 2/24/2020 | Causality | Guest lecturer: Amelia Averitt will talk about causality in biomedicine and its implications for observational health data. Pre-reading: a review of counterfactual causal inference and associated methods, available here. |
| 3/2/2020 | Model checking | "Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI," available here; "'The Human Body is a Black Box': Supporting Clinical Decision-Making with Deep Learning," available here. |
| 3/30/2020 | Fairness | Two-page project proposals due. Guest speaker: Irene Chen. Reading #1: "Treating health disparities with artificial intelligence" (available here). Reading #2: "Health disparities and health equity: the issue is justice" (available here). |
| 4/6/2020 | Trust | "AI-Mediated Communication: How the Perception that Profile Text Was Written by AI Affects Trustworthiness," available here. |
| 4/13/2020 | Safety and transparency | Reading #1: "The Mythos of Model Interpretability." Reading #2: "Explanation in Artificial Intelligence: Insights from the Social Sciences" (available here). |
| 4/20/2020 | Safety and transparency | https://arxiv.org/pdf/1708.01870.pdf |
| 4/27/2020 | Algorithmic ethics | Guest lecture: Sandra Lee. Reading #1: "The Ethics of Algorithms: Mapping the Debate" (available here). Reading #2: "Towards an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness" (available here). |
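For students new to fairness metrics, the subpopulation-calibration idea from the 2/10 readings can be illustrated in a few lines of Python. This is a hypothetical sketch, not part of the assigned materials: a risk model is calibrated within a group when its mean predicted risk matches the group's observed outcome rate, and a gap for one group signals potential bias. All data and group labels below are made up.

```python
# Sketch: per-subpopulation calibration check (illustrative only; toy data).
from collections import defaultdict

def calibration_by_group(groups, predicted_risk, outcomes):
    """Return {group: (mean predicted risk, observed outcome rate)}."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])  # [pred sum, outcome sum, count]
    for g, p, y in zip(groups, predicted_risk, outcomes):
        sums[g][0] += p
        sums[g][1] += y
        sums[g][2] += 1
    return {g: (s[0] / s[2], s[1] / s[2]) for g, s in sums.items()}

# Toy example: the model under-predicts risk for group "B".
groups = ["A", "A", "A", "B", "B", "B"]
pred   = [0.2, 0.4, 0.6, 0.2, 0.4, 0.6]
obs    = [0,   0,   1,   1,   1,   1]

for g, (mean_pred, rate) in sorted(calibration_by_group(groups, pred, obs).items()):
    print(f"group {g}: mean predicted risk {mean_pred:.2f}, observed rate {rate:.2f}")
```

On this toy data both groups receive the same mean predicted risk (0.40), but group B's observed outcome rate is 1.00 versus 0.33 for group A, which is the kind of subgroup miscalibration the reading discusses.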