Summary of Seminar: Using Machine Learning to Increase Equity in Healthcare and Public Health (Emma Pierson)

This is an AI generated summary. There may be inaccuracies.

00:00:00 - 00:55:00

Professor Pierson discusses how machine learning can be used to increase equity in healthcare and public health by identifying and correcting for discriminatory practices. She points to a recent paper she wrote on the topic.

  • 00:00:00 Professor Pierson discusses her work on using machine learning to reduce inequity in healthcare and public health. She identifies a unifying methodological theme across this work: the challenge of missing data.
  • 00:05:00 The talk discusses how human bias can lead to missed diagnoses and overtesting in medical testing, and draws an analogy to police traffic stops. It also discusses how racial discrimination in police searches occurs, and how such discrimination can be measured with a machine learning algorithm.
  • 00:10:00 This segment discusses two quantities used to assess whether discrimination is taking place in searches: the search rate and the hit rate, the latter being the basis of the outcome test. The hit rate is the fraction of searches of a group that actually turn up contraband, and a systematically lower hit rate for a group suggests officers apply a lower evidentiary bar when searching that group. The outcome test is flawed, however, because hit-rate differences can also arise when groups have different underlying risk distributions rather than from discrimination. (A minimal sketch of computing search and hit rates by group appears after this list.)
  • 00:15:00 The seminar discusses a model for estimating the search thresholds officers apply, which is used to infer discrimination in police stops. The model assumes that when an officer stops a driver, they estimate the probability that the driver is carrying contraband and search when that probability exceeds a threshold. The fast threshold test parameterizes the risk distributions so that inference runs roughly 100 times faster than the original threshold test, which used beta distributions. (A toy sketch of the threshold and hit-rate relationship appears after this list.)
  • 00:20:00 This segment discusses how machine learning can be used to identify discriminatory practices in public health and healthcare, with search rates, hit rates, and thresholds as examples. It also discusses how the limitations of these measures can be overcome by making explicit assumptions, and how the same ideas could be applied in medicine.
  • 00:25:00 The goal of this project is to assess the relative prevalence of underreported conditions such as intimate partner violence, which is difficult because the rate of underreporting varies by group. Doing so is important for quantifying disparities and for passing policy to ameliorate them.
  • 00:30:00 The video introduces two assumptions that such methods must make in order to learn accurately from only positive (diagnosed) labels: that the true positives and true negatives are at least somewhat separable, and that the labeling frequency is constant across the feature space. It then introduces a novel method for estimating relative prevalence.
  • 00:35:00 The video discusses the method used to estimate the relative prevalence of intimate partner violence (IPV) across groups of patients. The method relies on three assumptions: that there are no false positives, that diagnosis occurs at random within each group, and that patients with the same symptoms are equally likely to have the condition. When these assumptions hold the method recovers the relative prevalence, and if some of them fail it still provides a lower bound on the magnitude of the disparity. (A simplified sketch of this kind of corrected prevalence estimate appears after this list.)
  • 00:40:00 The talk presents a model for estimating the relative prevalence of a condition in a target group using machine learning. When its assumptions hold the model produces accurate estimates, and it can be applied across a variety of settings in epidemiology and predictive modeling.
  • 00:45:00 The speaker discusses the importance of race in clinical medicine and how social constructs can have medical consequences. She then presents preliminary data showing that family history is highly predictive for white patients but not for black patients. (A sketch of this group-stratified analysis appears after this list.)
  • 00:50:00 In this segment, Professor Pierson discusses the importance of scrutinizing the use of race in clinical algorithms, highlighting the potential for health disparities when race is incorrectly used as a proxy for biological differences. She goes on to discuss a paper that examines the potential for bias when race correction is used.
  • 00:55:00 Professor Pierson closes by discussing how machine learning can be used to increase equity in healthcare and public health, pointing to a recent paper she wrote on the topic. She also provides two citations on the topic of proxy use in medicine.
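
For concreteness, here is a minimal sketch (not code from the seminar) of how search rates and hit rates by group can be computed from a stop-level table. The column names and the tiny pandas example data are assumptions for illustration only.

```python
# Compute the two quantities the benchmark and outcome tests compare:
# search rate (fraction of stops that lead to a search) and hit rate
# (fraction of searches that actually find contraband), by driver race.
import pandas as pd

# Hypothetical stop-level data; real analyses use millions of stops.
stops = pd.DataFrame({
    "driver_race":      ["white", "white", "white", "black", "black", "black"],
    "searched":         [False,   True,    False,   True,    True,    False],
    "contraband_found": [False,   True,    False,   False,   True,    False],
})

# Search rate by group (input to benchmark-style comparisons).
search_rates = stops.groupby("driver_race")["searched"].mean()

# Hit rate by group: among searched drivers only (input to the outcome test).
searched = stops[stops["searched"]]
hit_rates = searched.groupby("driver_race")["contraband_found"].mean()

print(search_rates)
print(hit_rates)
```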
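
The threshold test itself is a hierarchical Bayesian model fit across locations and groups; the toy sketch below only illustrates the core relationship it exploits, under the simplifying assumption that stopped drivers' contraband risk follows a single known Beta distribution and that officers search exactly when estimated risk exceeds a threshold. The parameter values and group names are hypothetical.

```python
# Toy threshold-model arithmetic: given a Beta(a, b) risk distribution over
# stopped drivers, an observed search rate pins down the implied threshold t
# (search rate = P(risk > t)), and the implied hit rate is E[risk | risk > t].
from scipy.stats import beta

def implied_threshold(search_rate, a, b):
    # search_rate = P(risk > t)  =>  t = F^{-1}(1 - search_rate)
    return beta.ppf(1.0 - search_rate, a, b)

def implied_hit_rate(t, a, b):
    # E[risk | risk > t] = (a / (a + b)) * S_{a+1,b}(t) / S_{a,b}(t),
    # using the identity  x * f_{a,b}(x) = (a / (a + b)) * f_{a+1,b}(x).
    return (a / (a + b)) * beta.sf(t, a + 1, b) / beta.sf(t, a, b)

a, b = 2.0, 20.0  # assumed risk distribution shared by both groups
for group, search_rate in [("group A", 0.03), ("group B", 0.08)]:
    t = implied_threshold(search_rate, a, b)
    print(group, "threshold:", round(t, 3),
          "implied hit rate:", round(implied_hit_rate(t, a, b), 3))
```

With the same risk distribution, the more heavily searched group ends up with a lower implied threshold and a lower implied hit rate; differences in the inferred thresholds, rather than in raw hit rates, are what the threshold test compares.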
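
The sketch below illustrates the general positive-unlabeled-learning idea behind this kind of relative prevalence estimation, using a simple Elkan-Noto-style correction on synthetic data. It is an assumption-laden illustration (no false positives, diagnosis at random within each group, a classifier that can reach near-certain cases), not necessarily the exact estimator presented in the seminar.

```python
# Estimate prevalence(group A) / prevalence(group B) when only diagnoses are
# observed: p(diagnosed | x) = c_g * p(condition | x) within group g, so a
# group's prevalence is mean(p(diagnosed | x)) / c_g, with c_g approximated
# by the maximum predicted diagnosis probability in that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

def relative_prevalence(X_a, s_a, X_b, s_b):
    """X: symptom features; s: observed diagnosis labels (1 = diagnosed)."""
    estimates = []
    for X, s in [(X_a, s_a), (X_b, s_b)]:
        clf = LogisticRegression(max_iter=1000).fit(X, s)
        p_s = clf.predict_proba(X)[:, 1]   # p(diagnosed | symptoms)
        c = p_s.max()                      # approx. diagnosis rate among true cases
        estimates.append(p_s.mean() / c)   # corrected prevalence estimate
    return estimates[0] / estimates[1]

# Synthetic example: both groups have the same true prevalence, but group B's
# cases are diagnosed half as often, which naive diagnosis rates would miss.
rng = np.random.default_rng(0)
X_a = rng.normal(size=(5000, 3)); y_a = (X_a[:, 0] > 0.5).astype(int)
X_b = rng.normal(size=(5000, 3)); y_b = (X_b[:, 0] > 0.5).astype(int)
s_a = y_a * rng.binomial(1, 0.6, size=len(y_a))   # c_A = 0.6
s_b = y_b * rng.binomial(1, 0.3, size=len(y_b))   # c_B = 0.3

print("naive diagnosis-rate ratio:", s_a.mean() / s_b.mean())                  # ~2
print("corrected prevalence ratio:", relative_prevalence(X_a, s_a, X_b, s_b))  # ~1
```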
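
Finally, a small sketch of the group-stratified check described in the family-history segment: measure how predictive family history is within each group separately rather than in the pooled population. The data here is synthetic and simulated purely to mirror the described pattern, and the column names are assumptions.

```python
# Stratified predictiveness check: AUC of family history alone, within each group.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 4000
patients = pd.DataFrame({
    "race": rng.choice(["white", "black"], size=n),
    "family_history": rng.binomial(1, 0.3, size=n),
})
# Simulate an outcome whose link to family history is strong in one group and
# absent in the other (only to illustrate the stratified analysis).
lift = np.where(patients["race"] == "white", 0.35 * patients["family_history"], 0.0)
patients["outcome"] = rng.binomial(1, 0.10 + lift)

for race, grp in patients.groupby("race"):
    auc = roc_auc_score(grp["outcome"], grp["family_history"])
    print(f"{race}: AUC of family history alone = {auc:.2f}")
```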
