

Article

4 January 2021

Author:
Rachel Thomas, Boston Review

Machine learning biases in health care threaten to further disempower patients

"Medicine's Machine Learning Problem", 4 Jan 2021

Data science is remaking countless aspects of society, and medicine is no exception... Machine learning is now being used to determine which patients are at high risk of disease and need greater support (sometimes with racial bias), to discover which molecules may lead to promising new drugs, to search for cancer in X-rays (sometimes with gender bias), and to classify tissue on pathology slides. Last year MIT researchers trained an algorithm that was more accurate at predicting the presence of cancer within five years of a mammogram than techniques typically used in clinics... But despite the promise of these data-based innovations, proponents often overlook the special risks of datafying medicine in the age of artificial intelligence.

Consider one striking example that has unfolded during the pandemic. Numerous studies from around the world have found that significant numbers of COVID-19 patients—known as “long haulers”—experience symptoms that last for months. Good estimates range from 20 to 40 percent of all patients, depending on study design, and perhaps even higher. Yet a recent study from King's College—picked up by CNN and the Wall Street Journal—gives a much lower estimate, claiming that only 2 percent of patients have symptoms for more than 12 weeks and only 4 percent have symptoms for longer than 8 weeks. What explains the serious discrepancy? It turns out that the King's College study relies on data from a symptom-tracking app that many long haulers quit using because it didn’t take their symptoms or needs into account, resulting in a time-consuming and frustrating user experience. Long haulers are already dealing with disbelief from doctors, and the inaccurate results of this study may cause further harm—casting doubt on the reality of their condition.

This case is not an isolated exception, and it is not just an object lesson in bad data collection. It reflects a much deeper and more fundamental issue that all applications of data science and machine learning must reckon with: the way these technologies exacerbate imbalances of power. Data is not inert; it can cause a doctor to mistakenly tell a patient that her dementia-like symptoms must just be due to a vitamin deficiency or stress... As others have argued, the ethics of AI turn crucially on whose voices are listened to and whose are sidelined. These problems are not easily fixed, for the same reason they exist in the first place: the people most impacted—those whose lives are changed by the outcome of an algorithm—have no power, just as they are so often ignored when the technology is being built. Anyone excited about the promise of machine learning for medicine must wrestle seriously with the perils.