DBMI Special Seminar Series: Toward Diversity, Equity, and Inclusion in Informatics, Health Care, and Society
The Columbia Department of Biomedical Informatics announced a series of talks entitled “DBMI Special Seminar Series: Toward Diversity, Equity, and Inclusion in Informatics, Health Care, and Society.”
These talks, which began during the 2021 spring semester and are open to the public, focus on informatics research topics related to diversity, equity, and inclusion. They are part of the weekly DBMI Seminar, a 1-credit course in which DBMI students hear about new research methods from speakers in both academia and industry.
Seminars that are part of this series will be posted below, while upcoming seminars will be listed on the DBMI Seminar page.
Upcoming Seminars
Please check back prior to the 2023 fall semester to learn about upcoming DEI Special Seminars.
Previous Seminars
Speakers: Krystal Tsosie and Keolu Fox
Title: #DATABACK: Indigenous Genomic Data Justice for Indigenous Peoples
Abstract: Despite over a decade of efforts to increase diversity in genomic datasets, Indigenous peoples still constitute less than 1% of total representation. The answer, however, is not simply to recruit more Indigenous peoples, because defaulting to old, problematic norms of broad consent can recreate cycles of data exploitation and extraction that benefit Indigenous peoples last. To move forward, we need to rethink data equity approaches that center principles of Indigenous genomic data sovereignty, which means employing new techniques in blockchain and federated learning in addition to Indigenous-led bio-databanks. Hence, Drs. Tsosie and Fox advocate for an Indigenous data justice approach that is truly responsive to genomic medicine and precision health innovation.
Bios: Krystal Tsosie, PhD, MPH, MA, is an Indigenous (Diné/Navajo Nation) geneticist-bioethicist at Arizona State University’s School of Life Sciences and Center for Biology and Society. She co-founded the Native BioData Consortium, the first US Indigenous-led biobank and 501(c)(3) nonprofit research institution. Much of her current research centers on ethical engagement with Indigenous communities in precision health through genetic epidemiology, public health, and computational approaches. She is also increasingly exploring machine learning approaches and using digital data tools to operationalize Indigenous genomic data sovereignty to foster Indigenous-led data solutions and build Tribal Nations’ capacity in technology, health, education, and local data economies.
Keolu Fox is the first Kānaka Maoli (Native Hawaiian) to receive a doctorate in genome sciences, and is an assistant professor at the University of California, San Diego, affiliated with the Department of Anthropology, the Global Health Program, the Halıcıoğlu Data Science Institute, the Climate Action Lab, the Design Lab, and the Indigenous Futures Institute. His work focuses on the connection between raw data as a resource and the emerging value of genomic health data from Indigenous communities. He has experience designing and engineering genome sequencing and editing technologies, and a decade of grassroots experience working with Indigenous partners to advance precision medicine. As an ENRICH Global Chair, Keolu will build a library for Indigenous health data in partnership with Indigenous communities. He will pilot a platform that will enable collecting and protecting Indigenous health data using Indigenous Data Sovereignty (IDS) principles, which provides a framework for allowing Indigenous communities themselves to manage and benefit from their own data. Ultimately, he hopes to create a replicable standard for Indigenous data sovereignty.
Speaker: Lauren Wilcox, Responsible AI & Human-Centered Technology, Google Research
Title: Participatory Approaches to Health AI
Per the speaker’s request, this session was not recorded.
Abstract: Advances in computing technology continue to offer us new insights about our health. As mutually reinforcing trends make the use of wearable and mobile devices routine, we now collect personal, health-related data at an unprecedented scale. Meanwhile, the use of deep-learning-based health screening technologies changes relationships between caregivers and care recipients, with multitudinous implications for equity, privacy, safety, and trust. How can researchers take inclusive and responsible approaches to envisioning solutions, curating training data, and deploying ML/AI-driven solutions? Who should be involved in decisions about how to use ML/AI in digital health and well-being solutions, and even what solutions matter in the first place?
In this talk, I will discuss participatory approaches to designing digital health and well-being technologies with impacted communities. Starting with field studies in clinics exploring how people navigated use of a deployed diagnostic AI system, and moving on to lessons learned from an international study of how people with marginalized health needs navigate aspects of their health care, I will highlight the importance of taking participatory approaches to technology design, development, and evaluation.
Bio: Lauren Wilcox, PhD, is a Senior Staff Research Scientist and Group Manager in Responsible AI and Human-Centered Computing at Google Research. Her work builds on many years of experience conducting human-centered computing research in service of human health and well-being. Previously at Google Health, Wilcox led initiatives to align AI advancements in healthcare with the needs of clinicians, patients, and their family members. She also holds an Adjunct Associate Professor position in Georgia Tech’s School of Interactive Computing, where she was a tenured associate professor. Wilcox was an inaugural member of the ACM Future of Computing Academy. She frequently serves on the organizing and technical program committees for premier conferences in the field. Wilcox received her PhD in Computer Science from Columbia University in 2013.
Title: Transforming the Health of Communities through Innovations in Social Computing
Speaker: Dr. Andrea Grimes Parker, Associate Professor at Georgia Tech
Watch This Presentation
Abstract: Digital health research—the investigation of how technology can be designed to support wellbeing—has exploded in recent years. Much of this innovation has stemmed from advances in the fields of human-computer interaction and artificial intelligence. A growing segment of this work is examining how information and communication technologies (ICTs) can be used to achieve health equity, that is, fair opportunities for all people to live a healthy life. Such advances are sorely needed, as there exist large disparities in morbidity and mortality across population groups. These disparities are due in large part to social determinants of health, that is, social, physical, and economic conditions that disproportionately inhibit wellbeing in populations such as low-socioeconomic status and racial and ethnic minority groups.
Despite years of digital health research and commercial innovation, profound health disparities persist. In this talk, I will argue that to reduce health disparities, ICTs must address social determinants of health. Intelligent interfaces have much to offer in this regard, and yet their affordances—such as the ability to deliver personalized health interventions—can also act as pitfalls. For example, a focus on personalized health interventions can lead to the design of interfaces that help individuals engage in behavioral change. While such innovations are important, to achieve health equity there is also a need for complementary systems that address social relationships. Social ties are a crucial point of focus for digital health research as they can provide meaningful supports for positive health, especially in populations that disproportionately experience barriers to wellbeing.
I will offer a vision for digital health equity research in which interactive and intelligent systems are designed to help people build, enrich, and engage social relationships that support wellbeing. By expanding the focus from individual to social change, there is tremendous opportunity to create disruptive interventions that catalyze and sustain population health improvements.
Bio: Andrea Grimes Parker is an Associate Professor in the School of Interactive Computing at Georgia Tech. She is also an Adjunct Associate Professor in the Rollins School of Public Health at Emory University and at Morehouse School of Medicine. Dr. Parker holds a Ph.D. in Human-Centered Computing from Georgia Tech and a B.S. in Computer Science from Northeastern University. She is the founder and director of the Wellness Technology Lab at Georgia Tech. Her interdisciplinary research spans the domains of human-computer interaction and public health, as she examines how social and interactive computing systems can be designed to address health inequities.
Dr. Parker has published widely in the space of digital health equity and received several best paper honorable mention awards for her research. Her research has been funded through awards from the National Science Foundation, the National Institutes of Health, the Aetna Foundation, Google, and Johnson & Johnson. Additionally, she is a recipient of the 2020 Georgia Clinical & Translational Science Alliance Team Science Award. Dr. Parker has held various leadership roles, including serving as co-chair for Workgroup on Interactive Systems in Healthcare (WISH) and as a member of the Johnson & Johnson / Morehouse School of Medicine Georgia Maternal Health Research for Action Steering Committee.
Title: Disability accessibility and fairness in Artificial Intelligence (AI)
Speaker: Cynthia Bennett, PhD, Senior Research Scientist at Google’s People + AI Research Group
Abstract: Artificial intelligence (AI) promises to automate and scale solutions to perennial accessibility challenges (e.g., generating image descriptions for blind users). However, research shows that AI-bias disproportionately impacts people already marginalized based on their race, gender, or disabilities, raising questions about potential impacts in addition to AI’s promise. In this talk I will overview broad concerns at the intersection of AI, disability, and accessibility. I will then share details about one project in this research space that led to guidance on human and AI-generated image descriptions that account for subjective and potentially sensitive descriptors around race, gender, and disability of people in images.
Bio: Dr. Cynthia Bennett is a Senior Research Scientist in Google’s Responsible AI and Human-Centered Technology organization. Her research concerns the intersection of AI ethics and disability. Bennett is regularly invited to speak; recent hosts include Stanford and Apple. Previously, Bennett worked at Carnegie Mellon University, Apple, and the University of Washington. Her work has received grant funding from Microsoft Research and the National Science Foundation, and eight of her peer-reviewed publications have received awards. Bennett is a disabled woman scholar working in the tech and academic sectors, and she engages in service focused on raising participation in these fields. Bennett’s website is bennettc.com, and her Twitter handle is @clb5590.
Title: Standardizing the Unstandardizable: The Case of Sex and Gender
Abstract: In 2015, notice number NOT-OD-15-102 was released by the National Institutes of Health. The notice specified “consideration of sex as a biological variable” (SABV), requiring submission of information regarding this new construct from 2016 onward. However, despite this imperative explicitly citing enhancement of reproducibility, it did not lay out any conceptualization of what SABV meant, in non-human animal or human contexts, and it relied heavily on binarist and gender essentialist assumptions, which have ultimately confused the situation further. This confusion has led to SABV being co-opted by transphobic and intersexphobic organizations and individuals, while not necessarily impacting reproducibility. Why are sex and gender such complicated variables to consider? How did these constructs come to exist within the purview of scientific analysis? And what work is being done to untangle the current situation? This talk will aim to discuss these questions, while also considering the deeper ideologies underlying current scientific research and sociopolitical agendas, and how they affect effective modeling of sex and gender constructs in informatics and beyond.
Bio: Clair Kronk (she/her) is a postdoctoral fellow at the transitioning Yale Center for Medical Informatics (YCMI). She is the creator and sole author of the first LGBTQIA+ controlled vocabulary for usage in health care settings, the Gender, Sex, and Sexual Orientation (GSSO) ontology, which contains information on over 15,000 terms. Dr. Kronk has provided valuable input on GSSO standards for a number of organizations, including the Health Level 7 (HL7) Gender Harmony Project (GHP), the Systematized Nomenclature of Medicine (SNOMED), Canada Health Infoway (CHI), the International Organization for Standardization (ISO), Queensland Health, the National Academies of Sciences, Engineering, and Medicine (NASEM), the United States Core Data for Interoperability (USCDI), the World Health Organization (WHO), the Trans Metadata Collective (TMDC), the Homosaurus, Wikidata, and the American Medical Informatics Association (AMIA) Diversity, Equity, and Inclusion (DEI) Task Force.
Title: Algorithmic bias and data platforms
Abstract: We’re increasingly aware of the many ways that algorithms can encode and scale up racial bias. When designed with careful attention to label choice, algorithms can also be used to counter biases present in the health care system and ingrained in medical knowledge. To do so effectively, researchers and product developers must have access to platforms on which they can access health data for the benefit of patients and society.
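To make the label-choice point concrete, below is a toy sketch in Python (not the speaker’s analysis; the data are synthetic and every variable name and parameter is a hypothetical assumption). It trains the same features once against a healthcare-cost label and once against a health-need label, then compares who gets flagged for extra care under each.

```python
# Toy illustration of label choice: identical features, two different labels.
# Entirely synthetic data and hypothetical assumptions; not the speaker's code.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 20_000

black = rng.random(n) < 0.5                    # group membership indicator
need = rng.poisson(2.0, size=n).astype(float)  # true health need (e.g., chronic conditions)

# Assumption encoded for illustration: at equal need, observed spending is
# lower for Black patients (e.g., due to unequal access to care).
cost = need * np.where(black, 800.0, 1200.0) + rng.normal(0, 500, size=n)

# Features available to the model: noisy views of need and utilization.
X = np.column_stack([
    need + rng.normal(0, 1, size=n),
    cost / 1000 + rng.normal(0, 1, size=n),
])

def share_black_in_top_decile(label):
    """Train on the given label, flag the predicted top 10%, return the share who are Black."""
    pred = LinearRegression().fit(X, label).predict(X)
    flagged = pred >= np.quantile(pred, 0.9)
    return black[flagged].mean()

print("Share of flagged patients who are Black:")
print(f"  label = cost:        {share_black_in_top_decile(cost):.2f}")
print(f"  label = health need: {share_black_in_top_decile(need):.2f}")
```

Under the cost label, the flagged group skews away from Black patients even though underlying need is equal by construction; switching the label to a direct measure of health removes that gap, which is the sense in which careful label choice can counter, rather than encode, existing biases.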
Bio: Ziad Obermeyer trained as an emergency doctor – and he still gets away as often as he can, to a hospital in rural Arizona, to work in the ER. But these days, he spends most of his time on research and teaching at Berkeley. Inspired by his clinical practice, he builds machine learning algorithms that help doctors make better decisions. He also studies where algorithms can go wrong, and how to fix them: his work on algorithmic bias has been highly influential both in public debate about algorithms and in regulatory oversight and civil investigations. He is a Chan Zuckerberg Biohub Investigator and a Faculty Research Fellow at the National Bureau of Economic Research, and he has been named an emerging leader by the National Academy of Medicine. His work has won numerous awards and has appeared in a wide range of venues (Science, Nature Medicine, the New England Journal of Medicine, and leading computer science conferences). He is a co-founder of Nightingale Open Science, a non-profit that makes massive new medical imaging datasets available for research, and Dandelion, a platform for AI innovation in health. Before coming to Berkeley, he was an Assistant Professor at Harvard Medical School and a consultant at McKinsey & Co.
Title: Advancing Health Equity Through the Use of Data
At the presenter’s request, this session was not recorded.
Bio: Julia Iyasere, M.D., is the Executive Director of the Dalio Center for Health Justice at NewYork-Presbyterian. In this role, she leads the Center’s efforts to address longstanding health inequities due to race, socio-economic differences, limited access to care, and other complex factors that impact the wellbeing of our communities. Dr. Iyasere attended Yale University for her B.S. in Biology and Columbia University for her M.D./M.B.A. After completing her residency in Internal Medicine at Columbia, Dr. Iyasere joined the Division of General Medicine at Columbia in 2012. Prior to her current role, Dr. Iyasere was the Associate Chief Medical Officer for Service Lines and the Co-Director of the Care Team Office at NYP. An Assistant Professor of Medicine, Dr. Iyasere continues to see patients as an internist in the Section for Hospital Medicine at Columbia.
Title: Using Machine Learning to Increase Equity in Healthcare and Public Health
Abstract: Our society remains profoundly unequal. Worse, there is abundant evidence that algorithms can, improperly applied, exacerbate inequality in healthcare and other domains. This talk pursues a more optimistic counterpoint — that data science and machine learning can also be used to illuminate and reduce inequality in healthcare and public health — by presenting vignettes about women’s health, COVID-19, and pain.
Bio: Emma Pierson is an assistant professor of computer science at the Jacobs Technion-Cornell Institute at Cornell Tech and the Technion, and a computer science field member at Cornell University. She holds a secondary joint appointment as an Assistant Professor of Population Health Sciences at Weill Cornell Medical College. She develops data science and machine learning methods to study inequality and healthcare. Her work has been recognized by best paper, poster, and talk awards, an NSF CAREER award, a Rhodes Scholarship, Hertz Fellowship, Rising Star in EECS, MIT Technology Review 35 Innovators Under 35, and Forbes 30 Under 30 in Science. Her research has been published at venues including ICML, KDD, WWW, Nature, and Nature Medicine, and she has also written for The New York Times, FiveThirtyEight, Wired, and various other publications.
Title: Achieving TechQuity
(seminar was not recorded at the request of Dr. Clark)
Abstract: Open discussions of social justice and health inequities may be an uncommon focus within information technology science, business, and health care delivery partnerships. However, the COVID-19 pandemic—which disproportionately affected Black, Indigenous, and people of color—has reinforced the need to examine and define the roles that technology partners should play to lead anti-racism efforts through our work. In this hour, we will discuss the imperative to prioritize TechQuity and to address social contexts in the implementation of AI and other technologies.
Bio: Cheryl Clark, MD, ScD, is an Assistant Professor of Medicine at Harvard Medical School and a hospitalist, social epidemiologist, and Associate Chief for Equity Research & Strategic Partnerships in the Brigham and Women’s Hospital Division of General Medicine and Primary Care. Dr. Clark’s research focuses on social determinants of cardiometabolic health in diverse and aging populations. She is principal investigator for community engagement in the New England hub of the National Institutes of Health All of Us Research Program and chaired the social determinants of health (SDOH) Task Force that developed the SDOH participant-provided information survey for All of Us. Dr. Clark serves on the Mass General Brigham Predictive Analytics committee to provide equity review of algorithms considered for clinical implementation. Dr. Clark chaired the COVID-19 equity response team during the early phase of the COVID-19 pandemic in 2020. She is the inaugural recipient of the Equity, Social Justice and Advocacy Award from Harvard Medical School and Harvard School of Dental Medicine.
Title: Racial and Ethnic Differences in Genetic Testing Uptake and Results among Young Breast Cancer Survivors: Looking Ahead at Future Work
(seminar was not recorded at the request of Dr. Jones)
Abstract: Genetic testing for hereditary breast and ovarian cancer (HBOC) syndrome (e.g., BRCA1/2 genes) is recommended for all young women diagnosed with breast cancer at age 45 or younger, yet this critical test is underutilized in this population. In this presentation, I will provide an overview of the current landscape of genetic testing and discuss my program of research, which focuses on racial and ethnic differences in genetic testing uptake and results among young breast cancer survivors (YBCS). In addition, I will provide an overview of my current and future work, including our innovative web-based decision aid intervention, RealRisks, which we are adapting for racially and ethnically diverse young breast cancer survivors in order to increase access to genetic testing and family risk communication. A special emphasis is placed on promoting health equity and reducing cancer health disparities.
Bio: Dr. Jones is an Assistant Professor of Nursing at the Christine E. Lynn College of Nursing at Florida Atlantic University. She obtained a Bachelor of Science in Nursing degree from Seton Hall University and a Master of Science in Nursing degree from the Catholic University of America, with a specialization in community/public health nursing and the care of immigrants, refugees, and global health. She holds a certification as an advanced public health nurse (PHNA-BC). She obtained a Doctor of Philosophy (PhD) in Nursing degree from Duquesne University and completed a postdoctoral research fellowship at Dana-Farber Cancer Institute and Harvard Medical School.
Her research focuses on cancer prevention and control, risk communication, and risk reduction. Her current work focuses on improving uptake of genetic testing for breast cancer risk (i.e., BRCA1/2 genes and multigene panel testing) through culturally appropriate interventions, facilitating informed decision-making about cancer risk-reducing strategies, and promoting family risk communication among young breast cancer survivors and their at-risk family members, with a particular emphasis on Black and Hispanic women. Her research is supported by the National Institutes of Health (NIH) and the DAISY Foundation.
Title: Are phenotyping algorithms fair for underrepresented minorities within older adults?
Abstract: The widespread adoption of machine learning (ML) algorithms for risk stratification has unearthed plenty of cases of racial/ethnic bias within algorithms. When built without careful weighting and bias-proofing, ML algorithms can give wrong recommendations, thereby worsening the health disparities faced by communities of color. Biases within electronic phenotyping algorithms are largely unexplored. In this work, we look at probabilistic phenotyping algorithms for clinical conditions common in vulnerable older adults: dementia, frailty, mild cognitive impairment, Alzheimer’s disease, and Parkinson’s disease. We created an experimental framework to explore racial/ethnic biases within a single healthcare system, Stanford Health Care, and to fully evaluate the performance of such algorithms under different ethnicity distributions, allowing us to identify which algorithms may be biased and under what conditions. We demonstrate that these algorithms show performance (precision, recall, accuracy) variations of anywhere from 3% to 30% across ethnic populations, even when ethnicity is not used as an input variable. In over 1,200 model evaluations, we have identified patterns that indicate which phenotype algorithms are more susceptible to exhibiting bias for certain ethnic groups. Lastly, we present recommendations for how to discover and potentially fix these biases in the context of the five phenotypes selected for this assessment.
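As an illustration of the kind of stratified evaluation described in the abstract, here is a minimal sketch (synthetic data; the column names, 0.5 threshold, and group labels are assumptions for illustration, not the study’s actual setup) that computes precision, recall, and accuracy of a phenotyping algorithm separately for each racial/ethnic group and reports the spread between groups.

```python
# Minimal sketch: evaluate one phenotyping algorithm's output separately per
# racial/ethnic group. Synthetic data; column names are illustrative only.
import numpy as np
import pandas as pd
from sklearn.metrics import precision_score, recall_score, accuracy_score

rng = np.random.default_rng(0)
n = 5_000

# Stand-in for a labeled cohort: gold-standard phenotype labels, the
# algorithm's predicted probabilities, and a recorded ethnicity field.
cohort = pd.DataFrame({
    "ethnicity": rng.choice(["Asian", "Black", "Hispanic", "White"], size=n),
    "gold_label": rng.integers(0, 2, size=n),
})
cohort["phenotype_prob"] = np.clip(
    0.6 * cohort["gold_label"] + rng.normal(0.3, 0.25, size=n), 0, 1
)
cohort["predicted"] = (cohort["phenotype_prob"] >= 0.5).astype(int)

# Compute the same metrics within each group.
rows = []
for group, sub in cohort.groupby("ethnicity"):
    rows.append({
        "ethnicity": group,
        "n": len(sub),
        "precision": precision_score(sub["gold_label"], sub["predicted"]),
        "recall": recall_score(sub["gold_label"], sub["predicted"]),
        "accuracy": accuracy_score(sub["gold_label"], sub["predicted"]),
    })
report = pd.DataFrame(rows).set_index("ethnicity")
print(report.round(3))

# A large best-to-worst gap across groups is one signal of potential bias,
# even though ethnicity was never an input to the algorithm itself.
metrics = report[["precision", "recall", "accuracy"]]
print((metrics.max() - metrics.min()).round(3))
```

In the study, this kind of per-group evaluation was repeated across different ethnicity distributions and phenotypes; the sketch shows only the per-group metric computation.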
Bio: Dr. Juan M. Banda leads the Panacea Lab at Georgia State University (GSU), where he builds machine learning and NLP methods that help generate insights from multi-modal, large-scale data sources, with applications to precision medicine, medical informatics, and other domains. His research interests are not limited to structured data; he is also well-versed in extracting terms and clinical concepts from millions of unstructured electronic health records and using them to build predictive models (electronic phenotyping) and mine for potential multi-drug interactions (drug safety). Dr. Banda has published over 70 peer-reviewed conference and journal papers, serves as an editorial board member of the Journal of the American Medical Informatics Association and Frontiers in Medicine – Translational Medicine, and reviews for JBI, Nature Digital Medicine, Nature Scientific Data, Nature Protocols, PLOS One, and several other leading journals. Prior to becoming an assistant professor of Computer Science at Georgia State University, Dr. Banda was a postdoctoral scholar and then a research scientist at Stanford’s Center for Biomedical Informatics Research. He is an active collaborator in the Observational Health Data Sciences and Informatics (OHDSI) community, his work has been funded by the Department of Veterans Affairs, the National Institute on Aging, NASA, NSF, and NIH, and he serves as a program committee member and chair for several conferences and workshops, including ICML, NeurIPS, FLAIRS, and IEEE Big Data.
Title: Multimorbidity Patterns Across Race/Ethnicity Stratified by Age and Obesity: A Cross-sectional Study of a National US Sample
(Due to ongoing research, this seminar was not recorded)
Objectives: The objective of our study is to assess differences in prevalence of multimorbidity by race.
Methods: We applied the FP-growth algorithm to middle-aged and elderly cohorts stratified by race, age, and obesity level, using 2016-2017 data from the Cerner HealthFacts® Electronic Health Record data warehouse. For each age/obesity level, we identified disease combinations that are shared by all races/ethnicities, those shared by some, and those unique to one group (a minimal code sketch of this stratified approach follows the abstract).
Results: Our findings demonstrate that even after controlling for age and obesity, there are differences in multimorbidity prevalence across races. There are multimorbidity combinations distinct to some racial groups—many of which are understudied. Some multimorbidities are shared by some but not all races. African Americans presented with the most distinct multimorbidities at an earlier age.
Discussion: The identification of prevalent multimorbidity combinations amongst subpopulations provides information specific to their unique clinical needs.
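A minimal sketch of the stratified frequent-pattern idea described in the Methods above is shown below, using the open-source mlxtend implementation of FP-growth on toy data. The condition columns, the 0.5 support threshold, and the group labels are illustrative assumptions, not the study’s actual variables, thresholds, or code.

```python
# Toy sketch: mine multimorbidity combinations per racial/ethnic group with
# FP-growth and compare shared vs. unique patterns. Illustrative data only.
import pandas as pd
from mlxtend.frequent_patterns import fpgrowth

# One row per patient: race/ethnicity plus chronic-condition flags
# (imagine this is already restricted to one age/obesity stratum).
patients = pd.DataFrame({
    "race":         ["Black", "Black", "White", "White", "Hispanic", "Hispanic"],
    "diabetes":     [1, 1, 0, 1, 1, 0],
    "hypertension": [1, 1, 1, 1, 0, 1],
    "ckd":          [1, 0, 0, 0, 1, 0],
    "depression":   [0, 1, 1, 0, 0, 1],
})
condition_cols = ["diabetes", "hypertension", "ckd", "depression"]
patients[condition_cols] = patients[condition_cols].astype(bool)

# Frequent disease combinations (support >= 50%) within each group,
# keeping only true multimorbidity patterns (two or more conditions).
patterns_by_race = {}
for race, sub in patients.groupby("race"):
    itemsets = fpgrowth(sub[condition_cols], min_support=0.5, use_colnames=True)
    multi = itemsets[itemsets["itemsets"].apply(len) >= 2]
    patterns_by_race[race] = set(multi["itemsets"])

# Partition combinations into those shared by all groups vs. unique to one.
shared_by_all = set.intersection(*patterns_by_race.values())
print("Shared by all groups:", [tuple(p) for p in shared_by_all])
for race, pats in patterns_by_race.items():
    others = set().union(*(p for r, p in patterns_by_race.items() if r != race))
    print(f"Unique to {race}:", [tuple(p) for p in pats - others])
```

The same pattern generalizes: run FP-growth within each race/age/obesity stratum, then partition the resulting disease combinations into those shared by all groups, shared by some, or unique to one.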

