How to fight misinformation through research
Anyone who has lived through the COVID-19 pandemic won’t be surprised by the findings of new research from the UNSW Business School – that people are jumping to conclusions when reading studies with relatively small sample sizes.
This tendency is not limited to the general public, either. The research (which had a sample of nearly 4,000 participants) was found to apply to a wide variety of participants, including graduate-level statistics students and senior business leaders.
These findings from UNSW Business School’s Dr Siran Zhan, a senior lecturer in the School of Management and Governance, show how easily people jump to conclusions when reading studies, making it essential that journalists – and the general public – communicate and assimilate this information with a critical eye.
In the study, “Relative insensitivity to sample sizes in judgments of frequency distributions,” Dr Zhan and co-author Dr Krishna Savani, a professor of management in the Department of Management and Marketing at the Hong Kong Polytechnic University, show that people ignore sample sizes in their judgments and decisions and tend to be unduly confident in conclusions drawn from studies with as few as three participants.
“What surprised us was that when we looked at samples of university-level statistics students and seasoned senior managers, who are expected through their education or professional work to make judgments and decisions based on sound statistical principles, they ignored sample size as much as lay audiences did,” says Dr Zhan.
“It is particularly alarming to think that many important business and public policy decisions could have been made based on unreliable results from small samples,” she says.
Dr Zhan says research shows that people may not have the right intuition about what counts as evidence, making it difficult to correctly use statistics and research evidence to guide their inferences and decisions.
The good news? The researchers also tested a way to prevent the spread of misinformation.
What is a sample size and why is it important?
Early in the COVID-19 pandemic, pharmaceutical and biotech company Moderna (MRNA) reported that its experimental vaccine was successful in eight volunteers. While only a small group of healthy volunteers were tested, reporters were quick to report the news, which was so well received it sent Moderna’s stock price up 20%.
Just hours after announcing the trial’s success, Moderna sold 17.6 million shares to the public, raising US$1.3 billion. While Moderna and several of its top executives profited from the boom, some critics say it overstated the significance of the vaccine trial and manipulated the market.
Examples like these demonstrate that most people don’t think much about the importance of study size when drawing conclusions from articles they read in print and online.
“In other words, the general tendency of people to be unduly confident in conclusions drawn from small samples is out of step with statistical principles and can lead to poor judgment and bad decisions,” says Dr. Zhan.
So, in six experiments involving a total sample of 3,914 respondents, the researchers tested whether people pay attention to variations in sample size spanning one or two orders of magnitude.
The results reveal that people pay minimal attention to variations in sample size by a factor of 50, 100, and 400 when making judgments and decisions based on a single sample.
“Even with a sample size of three, the average confidence level of participants was 6.6 out of 10, indicating that people have quite a high level of confidence in data from incredibly small samples, consistent with previous research,” says Dr Zhan.
“As researchers, we realize that the same conclusion is much more believable from a sample of 3,000 than from a sample of 30. However, shockingly, the general population doesn’t seem to share that intuition,” she says.
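The intuition Dr Zhan describes has a simple statistical basis: the standard error of a sample mean shrinks with the square root of the sample size. The simulation below is a minimal sketch (the population numbers are invented for illustration, not taken from the study) showing how much more a mean based on 30 observations wobbles than one based on 3,000:

```python
import math
import random

random.seed(42)

def sample_mean_spread(n, trials=2000):
    """Simulate many surveys of size n drawn from the same population
    and measure how much the sample mean bounces between surveys."""
    means = []
    for _ in range(trials):
        # Hypothetical population: normally distributed, mean 50, sd 10.
        sample = [random.gauss(50, 10) for _ in range(n)]
        means.append(sum(sample) / n)
    avg = sum(means) / trials
    return math.sqrt(sum((m - avg) ** 2 for m in means) / trials)

spread_30 = sample_mean_spread(30)
spread_3000 = sample_mean_spread(3000)

# Theory says the standard error is sd / sqrt(n), so means from n=30
# should wobble about sqrt(3000/30) = 10 times more than from n=3000.
print(spread_30 / spread_3000)
```

Because the ratio of sample sizes is 100, the means from the small surveys scatter roughly ten times as widely, which is exactly why the same headline result deserves far more confidence at n = 3,000 than at n = 30.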
What is an appropriate sample size?
With the increasing spread of disinformation and misinformation online, it is becoming increasingly important to make judgments about what is presented to us in the media.
“With the proliferation of statistics in the news media and in organizations calling for evidence-based decision-making, current findings indicate that people may not have the right intuition about what counts as evidence, which makes it difficult for them to properly use statistics and research evidence to guide their conclusions and decisions,” says Dr. Zhan.
But is there such a thing as the right sample size? Bigger is usually better, statistically.
“The average result of any sample is pulled or biased by outliers. But as your sample size increases, your sample gets closer to the population, which means fewer errors in estimation,” says Dr. Zhan.
“When the sample size is small (e.g., 30), any outlier has a much stronger effect on the mean, making your mean less reliable than when the sample size is large (e.g., 3,000).”
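Dr Zhan’s outlier point can be checked with quick arithmetic. In the toy calculation below (the numbers are invented for illustration), a single extreme value drags the mean of a 30-observation sample 15 points off target but barely moves the mean of a 3,000-observation sample:

```python
# Toy illustration: one extreme outlier (500) added to otherwise
# identical observations of 50, at two different sample sizes.
small = [50.0] * 29 + [500.0]    # n = 30, one outlier
large = [50.0] * 2999 + [500.0]  # n = 3000, same single outlier

mean_small = sum(small) / len(small)  # 65.0  -> pulled 15 points off 50
mean_large = sum(large) / len(large)  # 50.15 -> barely moves

print(mean_small, mean_large)
```

The same outlier contributes 1/30 of the small sample’s mean but only 1/3,000 of the large sample’s, so its distorting effect is diluted a hundredfold.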
The only problem is the cost of time and money to collect data from a very large sample.
“In other words, when you estimate an effect from a sample (say, 500 customers), you are still trying to generalize your result to a population (say, your 13,974 existing customers), which in reality is too large for you to study in its entirety.”
“Therefore, a trade-off has to be made based on a solid statistical foundation so that we work with a statistically reliable but realistic sample size,” she says.
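One common way to make that trade-off concrete, assuming you are estimating a simple proportion, is the standard textbook margin-of-error formula n = z²·p(1−p)/E². This formula is general statistical practice, not something taken from Dr Zhan’s study:

```python
import math

def needed_sample(margin, z=1.96, p=0.5):
    """Respondents needed so a proportion estimate falls within
    `margin` of the truth at ~95% confidence (z = 1.96), using the
    worst-case variance at p = 0.5."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(needed_sample(0.05))  # within ±5 points: ~385 people
print(needed_sample(0.01))  # within ±1 point:  ~9,604 people
```

Shrinking the margin of error five-fold, from ±5 points to ±1 point, requires roughly 25 times as many respondents, which is why “bigger is better” collides so quickly with the cost of data collection.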
Study design to help prevent the spread of misinformation
Judgments and biases about research design and methodology not only affect what we read in the media; these judgments permeate nearly every aspect of our lives, from public policy to the workplace.
“Organizations evaluate employee performance based on a limited time window or a small number of projects (e.g., a monthly sales record or the last three projects). In these cases, employers and managers need to understand that their conclusions, no matter how substantial, may not be reliable if drawn from small samples,” says Dr Zhan.
Therefore, Dr. Zhan’s research has important implications for media, journalists, policy makers, and businesses who often use sample results (sometimes small samples) to inform the public and make critical decisions.
To improve the quality of decisions, Dr. Zhan suggests that all statistics should be accompanied by statistical inferences and “lay interpretations” of statistical inferences.
“We recommend that more statistical guidance (i.e., layman’s interpretations of strength-of-evidence statistics) be provided to aid readers’ interpretation of sample results and, ultimately, their decision-making,” she says.
What does this look like in practice? “For example, the Environmental Working Group provides a searchable online database of skincare product safety information (example here) based on two main scores: the strength of an effect (i.e., a hazard score) and the strength of evidence (i.e., availability of data).
“Information about the availability of data is equivalent to the strength of evidence that we advocate here,” says Dr Zhan.
But what about consumers?
Consumers don’t always read research papers, so research typically reaches consumers through product information, news, and books. “Therefore, we recommend that statistics on the strength of evidence be presented alongside information on the availability of data,” says Dr Zhan.
“Consumers should be educated to question any claim unless there is strong evidence (i.e., a large amount of independent research involving large samples). But educating consumers is difficult; more importantly, we believe the onus should be placed on corporations, journalists, and the media,” she says.
Siran Zhan et al., “Relative insensitivity to sample sizes in judgments of frequency distributions: People are equally confident in the results of 30 versus 3,000 observations,” Decision (2022). DOI: 10.1037/dec0000182
University of New South Wales
Quote: Size Matters: How Research Misinformation Can Be Tackled (2022, October 21) Retrieved October 22, 2022, from https://medicalxpress.com/news/2022-10-size-misinformation-tackled.html
This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.