this post was submitted on 29 Jun 2023
Researchers want the public to test themselves: https://yourmist.streamlit.app/. Rating each of 20 headlines as real or fake gives the user a set of scores and a "resilience" ranking that compares them to the wider U.S. population. It takes less than two minutes to complete.
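For anyone curious how a binary test like this might be scored, here's a rough sketch in Python. This is my own reconstruction, not the authors' code, and the function and score names are made up; the paper defines the actual subscales.

```python
# Hypothetical sketch of MIST-style scoring -- my reconstruction, not the authors' code.
# Assumes 20 headlines, 10 genuinely real and 10 genuinely fake, each rated
# "real" or "fake" by the participant.

def score_mist(ratings, truths):
    """ratings/truths: length-20 lists containing 'real' or 'fake'."""
    assert len(ratings) == len(truths) == 20
    r = sum(a == t == "real" for a, t in zip(ratings, truths))  # real-news detection (0-10)
    f = sum(a == t == "fake" for a, t in zip(ratings, truths))  # fake-news detection (0-10)
    v = r + f                                                   # overall veracity discernment (0-20)
    fake_calls = sum(a == "fake" for a in ratings)
    d = max(0, fake_calls - 10)         # distrust: calling too many headlines fake
    n = max(0, (20 - fake_calls) - 10)  # naivete: calling too many headlines real
    return {"veracity": v, "real": r, "fake": f, "distrust": d, "naivete": n}
```

The "resilience" ranking would then presumably come from comparing the overall score against population norms.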

The paper

Edit: the article might be misrepresenting the study and its findings, so it's worth checking the paper itself (see @realChem's comment in the thread).

[email protected] 6 points 1 year ago (last edited 1 year ago)

I feel like a lot of people are missing the point when it comes to the MIST. I just very briefly skimmed the paper.

Misinformation susceptibility is being vulnerable to information that is incorrect.

  • @[email protected] @[email protected] It seems the authors are looking to create a standardised measure of "misinformation susceptibility" that other researchers can employ in their studies, so that results are comparable across studies (the authors say the ad-hoc measures used in other studies are not comparable).
  • @[email protected] the reason a binary scale was chosen over a Likert-type scale is that:
    1. It's less ambiguous for participants
    2. It's easier for researchers to implement in their studies
    3. The results it produces are of similar 'quality' to those of the Likert-scale version
  • If a test that omits pictures, source names, and lede sentences produces results similar to one that includes them, then the simpler test is superior (think about the participants here). The MIST shows high concurrent validity with existing measures, and the paper reports a high level of predictive validity (although I'd have to read deeper to speak to the specifics). See the toy illustration of concurrent validity just below this list.
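For anyone unfamiliar with the term, concurrent validity is typically assessed by correlating the new measure with an established one completed by the same participants. A toy sketch (all numbers invented, not data from the paper):

```python
# Toy illustration of concurrent validity: correlate scores on the new measure
# with scores on an established measure from the same participants.
# All numbers below are invented for illustration.
from statistics import correlation  # Python 3.10+

mist_scores = [14, 18, 9, 16, 12, 17, 11, 15]   # hypothetical MIST totals (0-20)
established = [52, 70, 35, 66, 48, 68, 40, 60]  # hypothetical totals on an older measure

# A Pearson r near 1 means the two measures rank people similarly --
# that agreement is what "concurrent validity" refers to.
print(round(correlation(mist_scores, established), 2))  # ~0.99
```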

It's funny how the post about a misinformation test ended up riddled with misinformation because no one bothered to read the paper before letting their mouth run. Now, I don't doubt that your brilliant minds can overrule, off the top of your heads, a measure produced through years of research with hundreds of participants, but even if a deeper reading of the paper contradicts some of what I've said, shouldn't the paper be the baseline?

[email protected] 2 points 1 year ago

Thanks for this. I'll freely admit I'm an idiot and didn't feel smart enough to understand the paper (see username). The clarification is much appreciated.

I added the link to the paper to the body of the post.

[email protected] 2 points 1 year ago

Not saying you're wrong at all, but I just took the test, and it's kind of funny that the title of this article would fit right in as one of the "fake news" examples.

Obviously the study shows that the test is useful (as you pointed out quite well!), but it's ironic that the kind of "bait" they want people to recognize as fake news was used as the headline of the article covering the paper.

(Also, not saying the authors knew about or approved the article title or anything)

[email protected] 0 points 1 year ago

Thank you for this!

I have to say, though, it's really interesting to see the reactions here given the paper's findings. In the study, while people got better at spotting fake news after the game/test, they got worse at identifying real news and became more distrustful of news in general. I feel like that's on display here: people (somewhat correctly) mistrusting the misleading article, but also (somewhat incorrectly) mistrusting the research behind it.

[email protected] 1 point 1 year ago

That's a very interesting observation, now that you mention it.