Last updated by Pluto on 2025-12-03 08:26:20 UTC on behalf of the NeuroFedora SIG.
in The Transmitter on 2025-12-03 05:00:57 UTC.
in The Transmitter on 2025-12-03 05:00:03 UTC.
“It is certainly not standard medical practice to perform screening MRIs of the heart and abdomen,” says one expert
in Scientific American on 2025-12-02 20:00:00 UTC.
Many would-be whistleblowers write to us about papers with nonexistent references, possibly hallucinated by artificial intelligence. One reader recently alerted us to fake references in … an ethics journal. In an article about whistleblowing.
The paper, published in April in the Journal of Academic Ethics, explored “the whistleblowing experiences of individuals with disabilities in Ethiopian public educational institutions.”
Erja Moore, an independent researcher based in Finland, came across the article while looking into a whistleblowing case in that country. “I started reading this article and found some interesting references that I decided to read as well,” Moore told Retraction Watch. “To my surprise, those articles didn’t exist.”
The article joins a long list of publications flagged for fake references, which can be hallucinations generated by a large language model like ChatGPT.
Moore ended up analyzing all 29 of the paper’s references and found that at least 19 of them appear to be fabricated. Eighteen of the Google Scholar links in the online reference section turn up empty. Moore dug in further, searching for the article titles, common works by the authors, and the journal’s volumes and issues to triple-check whether some portion of each reference might have been incorrect rather than made up.
In many cases she found a nonexistent article title attributed to authors who have written other papers on the reference’s topic; a similarly titled article in a completely different journal by different authors; or (sometimes and) a journal volume, issue and page number leading to a totally different article from the one in the references.
The Journal of Academic Ethics is published by Springer Nature. Eleven of the fabricated references cite papers in the Journal of Business Ethics — another Springer Nature title.
“On one hand this is hilarious that an ethics journal publishes this, but on the other hand it seems that this is a much bigger problem in publishing and we can’t really trust scientific articles any more,” Moore said.
Yelkal Mulualem Walle of the Department of Information Technology at the University of Gondar in Ethiopia, the corresponding author of the article, told us he and his coauthors used ChatGPT to generate the references. “However, the research is real and used real data,” he said by email.
Coauthors Seyoum Tilahun Gedefaw at the university’s College of Education and Haregot Abreha Bezabih of the Federal Ethics and Anti-Corruption Commission in Addis Ababa did not respond to a request for comment.
Michael Stacey, head of communications for journals at Springer Nature, confirmed the publisher is aware of the concerns raised. “We take all such concerns about papers we have published extremely seriously and are now looking into the matter carefully,” he told us.
Hallucinated references in general “are an area we are actively exploring,” said Chris Graf, research integrity director for Springer Nature. “This is more complex than it may at first appear, as references can be detailed by authors in a variety of different ways, often do not include DOIs, and simple tools to identify hallucinated references can produce false positives.”
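As a rough illustration of why such checks are both possible and error-prone, here is a minimal first-pass reference checker, sketched in Python against the public Crossref REST API (api.crossref.org). It assumes the requests library, and it treats a low title-similarity score only as a flag for manual review: as Graf notes, a cited work may simply not be indexed, so a miss is not proof of fabrication.

```python
"""Minimal sketch of a first-pass reference checker using the public
Crossref REST API. A cited title with no close indexed match is only a
candidate for manual checking: naive matching produces false positives
(typos, renamed venues, works Crossref does not index)."""
import difflib
import requests

def best_crossref_match(cited_title: str) -> tuple[str, float]:
    """Query Crossref for a cited title; return the closest indexed
    title and its similarity ratio in [0, 1]."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": 5},
        timeout=30,
    )
    resp.raise_for_status()
    best_title, best_score = "", 0.0
    for item in resp.json()["message"]["items"]:
        for title in item.get("title", []):
            score = difflib.SequenceMatcher(
                None, cited_title.lower(), title.lower()
            ).ratio()
            if score > best_score:
                best_title, best_score = title, score
    return best_title, best_score

if __name__ == "__main__":
    # Hypothetical cited title, for illustration only.
    title, score = best_crossref_match(
        "Whistleblowing experiences of individuals with disabilities"
    )
    # Flag low scores for manual review; never auto-label as fabricated.
    print(f"closest match ({score:.2f}): {title}")
```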
This isn’t the first time Moore has come across fake references. Last year she gave a presentation, summarized on her blog, on fake references in Finnish master’s theses.
“I have a very old-fashioned sense of justice and think it is a duty to report scientific misconduct,” Moore said.
in Retraction watch on 2025-12-02 19:04:00 UTC.
A rapidly intensifying low-pressure system off the coast is keeping the worst of the snow away from Boston, New York City and Washington, D.C.
in Scientific American on 2025-12-02 18:26:00 UTC.
Scientific American hosted an event at Morehouse School of Medicine to highlight medical advances in treating sickle cell disease and how far we still have to go
in Scientific American on 2025-12-02 17:00:00 UTC.
The review was carried out and released by the Vaccine Integrity Project, which is dedicated to bolstering vaccines in the U.S.
in Scientific American on 2025-12-02 15:00:00 UTC.
in Science News: Health & Medicine on 2025-12-02 14:00:00 UTC.
Mathematics is not only an esoteric vocation but also indispensably alive and deeply human
in Scientific American on 2025-12-02 13:00:00 UTC.
Scientists found evidence of a distant planet’s moon system forming
in Scientific American on 2025-12-02 11:45:00 UTC.
I can’t believe I need to write this, but apparently, I do. Many journals and preprint sites are now overwhelmed with chatbot-generated submissions. This is bad, but at least it gets characterized as fraud, something we need to defend against. What I find much more worrying is that respectable scientists don’t seem to see the problem with generating part of their papers with a chatbot, and journals are seriously considering using chatbots to review papers as they struggle to find reviewers. This is usually backed up by anthropomorphic discourse, such as calling a chatbot “PhD-level AI”. This is to be expected from the CEO of a chatbot company, but unfortunately, it is not unusual to hear colleagues describe chatbots as some sort of “interns”. The rationale is that what the chatbot produces looks like text that a good intern could produce. Or that the chatbot “writes better than me”. Or that it “knows” much more than the average scientist about most subjects (does an encyclopedia “know” more than you, too?).
First of all, what’s a chatbot? Of course, I am referring to large language models (LLMs), which are statistical models of text tuned on very large text databases. An LLM is not about truth, but about what is written in the database (whether right, wrong, invented or nonsensical), and it is certainly not about personhood. But the term chatbot emphasizes the deceptive dimension of this technology: it is a statistical model conceived in such a way as to fool the user into believing that an actual person is talking. It is the intersection of advanced statistical technology and bullshit capitalism. We have become familiar with bullshit capitalism: a mode of financing based not on the expected revenues of the company that can be reasonably anticipated from a well-conceived business plan, but on closed-loop speculation about the short-term explosion of the share value of a company that sells nothing, based on a “pitch”. Thus, funders are apparently perfectly fine with a CEO explaining that his business plan to make revenue is to build a superhuman AI and ask it to come up with an idea. It’s a joke, right? Except he still has not clearly explained how he would make revenue, so not really a joke.
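To make “statistical model of text” concrete, here is a deliberately crude toy in Python: a bigram model that generates text purely from co-occurrence counts in its tiny training corpus. Real LLMs are neural networks trained at vastly larger scale, but the point carries over: generation is sampling from the statistics of the corpus, with no representation of truth behind it.

```python
"""Toy bigram 'language model': generation is sampling from corpus
statistics, with no notion of truth or meaning. A crude stand-in for
the idea of a statistical model of text, not how real LLMs work."""
import random
from collections import Counter, defaultdict

corpus = (
    "the cell fires when the stimulus appears . "
    "the cell is silent when the stimulus disappears ."
).split()

# Count how often each word follows each other word.
follows: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, n: int = 10) -> str:
    """Sample a continuation word by word from bigram frequencies."""
    words = [start]
    for _ in range(n):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        tokens, counts = zip(*candidates.items())
        words.append(random.choices(tokens, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # fluent-looking, truth-free output
```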
Scientists should not fall for that. Chatbots are essentially statistical models. No one is actually speaking or taking responsibility for what is being written. The argument that chatbot-generated text resembles scientist-generated text is a tragic misunderstanding of the nature of science. Doing science is not producing text that looks sciency. A PhD is not about learning to do calculations, to code, or to develop sophisticated technical skills (although those are obviously an important part of the training). It is about learning to prove (or disprove) claims. At its core, it is the development of an ethical attitude to knowledge.
I know, a number of scientists who have read or heard of a couple of 1970s–80s classics in the philosophy or sociology of science will object that truth doesn’t exist, or even that it is an authoritarian ideal, and that it’s all conventions. Well, at a time when authoritarian politicians are claiming that scientific discourse has no more value than political opinions, we should perhaps pause and reflect a little on that position. First of all, it’s a self-defeating position: if it is true that truth doesn’t exist, then that very claim cannot be true. This alone should make one realize that there might be something wrong with the position. Sure, truth doesn’t exist, in the sense that any general statement can always potentially be contradicted by future observations. In science, general claims are provisional. Sure. But consistency with current observations and theories does exist, and wrongness certainly does exist too. And sure, scientific claims are necessarily expressed in a certain theoretical context, and this context can always be challenged. But on what basis do we challenge theories and claims? Obviously on whether we think the theories are incorrect or partial or misleading – that is, on the basis of epistemic norms. Don’t call it “truth” if you want to sound philosophically aware, but we’re clearly in the same lexical field.
So, “truth doesn’t exist” is a fine provocative slogan, but certainly a misleading one, unless its meaning is carefully unpacked. Science is all about arguing, challenging claims with arguments, backing up theories with reasoning and experiments, looking for loopholes in reasoning, and generally about demonstrating. Therefore, what defines scientific work is not the application of specific methods and procedures (these differ widely between fields), but an ethical commitment: a commitment to an ideal of truth (or “truth”, if you prefer). This is what a PhD student is supposed to learn: to back up each of their claims with arguments; to challenge the claims of others, or to look for loopholes in their own arguments; to try to resolve apparent contradictions; to think of what might support or disprove a position.
It should be obvious then that to write science is not to produce text that simply looks like a scientific text. The scientific text must reflect the actual reasoning of the scientist, which reflects their best efforts to demonstrate claims. This is precisely what a statistical model cannot do. Everyone has noticed that a chatbot can be cued to claim one thing and its contrary within a few sentences. Nothing surprising there. It is very implausible that everything that has been written on the internet is internally consistent, so a good statistical model of that text will never produce consistent reasoning.
Let us now look at some concrete use cases of chatbots in science. Given the preceding remarks, the worst possible case I can see is reviewing. No, a statistical model is not a peer, and no, it doesn’t “reason” or “critically think”. Yes, it can generate sentences that look like reasoning or criticisms. But it could be right, it could be wrong, who knows. I hear the argument that human reviews are often pretty bad anyway. What kind of argument is that? Since mediocre reviewing is the norm, why not just generalize it? The scientific ecosystem has already been largely sabotaged by managerial ideologies and publishing sharks, so let’s just finish the work? If that’s the argument, then let’s just give up science entirely.
Another use case: generating the text of your paper. It’s not as bad, but it’s bad. Of course, there are degrees. I can imagine that, like myself, one may not be a native English speaker, and some technological help to polish the language could be helpful (I personally don’t use it for that because I think even the writing style of a chatbot is awful). But the temptation is great to use it to turn a series of vague statements into nice-sounding prose. The problem is that, by construction, whatever was not in the prompt is made up. The chatbot does not know your study; it does not know the specific context of your vague statements. It can be a struggle to turn “raw results” into scientific text, but that is largely because it takes work to make explicit all the implicit assumptions you make, to turn your intuitions into sound logical reasoning. And if you did not explicitly put it in the prompt in the first place, then the statistical model makes it up – there’s no magic. It may sound good, but it’s not science.
Even worse is the use of a chatbot to write the introduction and discussion. Many people find it hard to write those parts. They are right: it is hard. That is because these are the sections where the results get connected to the whole body of knowledge, where you try to resolve contradictions or reinterpret previous results, where you must use scholarship. This is particularly hard for students because it requires experience and broad scholarship. But it is in making those connections, by careful contextualized argumentation, that the web of knowledge gets extended. Sure, this is not always done as it should be. But scientists should work on that skill, not improve the productivity of mediocre writing.
One might object that there is already much storytelling in scientific papers currently written by humans, especially in the “prestigious” journals, and especially in biology (not to mention neuroscience, which is even worse). Well yes, but that is obviously a problem to solve, not something we should amplify by automation!
Let me briefly comment on other uses of this technology. One is to generate code. This can be helpful, say, to quickly generate a user interface, or to find the right commands to make a specific plot. This is fine when you can easily tell whether the code is correct or not – looking for the right syntax for a given command is such a use case. But it starts getting problematic when you use it to perform some analysis, especially when the analysis is not standard. There is no guarantee whatsoever that the analysis is done correctly, other than by checking yourself, which requires understanding it. So, in a scientific context, I anticipate that this will cause some issues. When I review a paper, I rarely check the code (it is usually not available anyway) to make sure it does what the paper claims. I trust the authors (unless of course some oddity catches my attention). Some level of trust is inevitable in peer review. I can see the temptation of a programming-averse biologist to just ask a chatbot to do their analyses, rather than looking for technical help. But the result of that is likely to be a rise in the rate of hard-to-spot technical errors.
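To make “hard-to-spot technical errors” concrete, here is a hypothetical sketch (mine, not from the post) of the kind of plausible-looking analysis code a chatbot might produce: it runs, it reports a p-value, and it is quietly wrong, because it applies an independent-samples t-test to a paired design and throws away the pairing.

```python
"""Hypothetical example of a plausible-looking but subtly wrong
analysis, the kind of error that is hard to spot in review."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired design: the same 12 cells measured before and after a drug.
baseline = rng.normal(10.0, 2.0, size=12)
treated = baseline + rng.normal(0.4, 0.3, size=12)  # small paired effect

# Runs fine and looks reasonable -- but ttest_ind ignores the pairing,
# so the large between-cell variance swamps the within-cell effect.
wrong = stats.ttest_ind(baseline, treated)

# The design actually calls for a paired test.
right = stats.ttest_rel(baseline, treated)

print(f"independent t-test p = {wrong.pvalue:.3f}")  # likely 'n.s.'
print(f"paired t-test      p = {right.pvalue:.3g}")  # clearly significant
```

Nothing flags the mistake at runtime; only someone who understands the experimental design would catch it.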
Another common use is bibliographic search. Some tools are in fact quite powerful, if you understand that you are dealing with a sophisticated search engine, not an expert who summarizes the findings of the literature or of individual papers. For example, I could use one to look for a pharmacological blocker of an ion channel, which will generally not be the main topic of the papers that use it. The model will output a series of matching papers. In general, the generated description of those papers is pretty bad and untrustworthy. But, if the references are correct, I can just look up the papers and check for myself. It is basically one way to do content-based search and should be treated as such, as a complement to other methods (looking for reviews, following the tree of citations, etc.).
In summary, no, chatbots are not scientists, not even baby scientists: science is about (or at least should be about) proving what you claim, not about producing sciency text. Science is an ethical attitude to knowledge, not a bag of methods, and only persons have ethics. Encouraging scientists to write their papers with a chatbot, or worse, automating reviewing with chatbots, is an extremely destructive move and should not be tolerated. This is not the solution to the problems that science currently faces. The solution to those problems is political, not technological, and we know it. And please, please, my dear fellow scientists, stop with the anthropomorphic talk. It’s an algorithm, not a person, and you should know it.
All this comes in addition to the many other ethical issues that so-called AI raises, on which a number of talented scholars have written at length (a few pointers: Emily Bender, Abeba Birhane, Olivia Guest, Iris van Rooij, Melanie Mitchell, Gary Marcus).
in Romain Brette on 2025-12-02 10:32:59 UTC.
in For Better Science on 2025-12-02 06:00:00 UTC.
in The Transmitter on 2025-12-02 05:00:06 UTC.
in The Transmitter on 2025-12-02 05:00:02 UTC.
The FDA is reportedly mulling changes that could make childhood vaccines less accessible and more expensive
in Scientific American on 2025-12-01 21:10:00 UTC.
The company behind TikTok is rolling out a smartphone AI assistant that behaves less like an app and more like a secretary
in Scientific American on 2025-12-01 19:22:00 UTC.
in Science News: Health & Medicine on 2025-12-01 16:00:00 UTC.
The Convention on International Trade in Endangered Species of Wild Fauna and Flora on Friday updated its regulation and monitoring of several iconic shark and ray species
in Scientific American on 2025-12-01 16:00:00 UTC.
in Women in Neuroscience UK on 2025-12-01 15:00:35 UTC.
in Science News: Health & Medicine on 2025-12-01 14:00:00 UTC.
New WHO guidance calls for a worldwide obesity treatment “ecosystem” to ensure that GLP-1 weight-loss drugs are used fairly
in Scientific American on 2025-12-01 13:45:00 UTC.

The influential citation database Scopus has delisted three journals from Iraq in a blow to recent government efforts to boost the standing of the country’s scholarly publications. One of the titles, which was included in Clarivate’s Web of Science, was dropped from that index as well.
Last month we reported on allegations that one of the delisted journals, the Medical Journal of Babylon, a publication of the University of Babylon in Hilla, was coercing authors to cite its articles. Citation manipulation is widespread in Iraq and elsewhere, but is considered a form of scientific misconduct.
“The Medical Journal of Babylon was flagged for re-evaluation at the end of September when we received concerns, and because we observed outlier publication performance,” said a spokesperson from Elsevier, which owns Scopus. The publisher marked the journal as delisted in its October update of indexed and delisted titles.
Elsevier also removed from its database the Diyala Journal of Medicine, a publication of the University of Diyala in Baqubah, and the Iraqi Journal of Agricultural Sciences, which is published by the University of Baghdad.
Meanwhile, Clarivate has dropped the Iraqi Journal of Agricultural Sciences from its Master Journal List, according to a November 17 update from the company. The title was listed in Web of Science’s Emerging Sources Citation Index.
The “journal was removed because it no longer meets our publicly available quality criteria,” a Clarivate spokesperson told us. “Specific details of our observations and which criteria were failed are shared with the publisher in advance of removal from the MJL. However, we do not share these details with other parties.”
None of the editors-in-chief of the three publications responded to our requests for comment.
The Iraqi government has worked to strengthen the international rankings of its universities and scientific journals, which are increasingly getting indexed in Scopus and, to a lesser extent, Web of Science. But the country allocates little funding to research, and academics say scientific misconduct is widespread, as it is in other parts of the world.
In October, we wrote about an Iraqi university that required students to cite its journals to graduate. We also described how the chief editorial adviser of the Medical Journal of Babylon, Alaa H. Al-Charrakh of the University of Babylon, had asked a prospective author to cite three published papers in the journal, apparently as a condition for accepting the manuscript.
Al-Charrakh told us at the time he did “not remember writing a letter with this content.” But he elsewhere acknowledged asking authors to cite his publication in their papers, as reported on November 6 in the newsletter FraudFactor in a story about the journal’s delisting.
Al-Charrakh lashed out in a November 4 post on Facebook at the anonymous people who alerted Scopus and Retraction Watch to their concerns about academic publishing in Iraq. FraudFactor called the post an “unhinged rant,” with Al-Charrakh referring to his critics as “half-men and quasi-women” and “spiteful people with psychological complexes.” He also said the whistleblowers should be brought to Iraq and held responsible for harming “Iraq’s academic reputation and its prominent position among regional and international countries.”
Al-Charrakh did not respond to a request for comment for this story.
in Retraction watch on 2025-12-01 11:00:00 UTC.
in The Transmitter on 2025-12-01 05:00:40 UTC.
Combining newer neural networks with older AI systems could be the secret to building an AI to match or surpass human intelligence
in Scientific American on 2025-11-29 13:00:00 UTC.

Dear RW readers, can you spare $25?
The week at Retraction Watch featured:
Did you know that Retraction Watch and the Retraction Watch Database are projects of The Center of Scientific Integrity? Others include the Medical Evidence Project, the Hijacked Journal Checker, and the Sleuths in Residence Program. Help support this work.
Here’s what was happening elsewhere (some of these items may be paywalled, metered access, or require free registration to read):
in Retraction watch on 2025-11-29 11:00:00 UTC.
in Science News: Health & Medicine on 2025-11-28 15:30:00 UTC.
Mars is passing behind the sun, giving NASA's Perseverance rover a view of the star’s far side
in Scientific American on 2025-11-28 13:00:00 UTC.
A technique called interferometry can greatly magnify tiny objects on the sky, and is powerful enough to reveal the surfaces of nearby stars
in Scientific American on 2025-11-28 11:45:00 UTC.
in For Better Science on 2025-11-28 06:00:00 UTC.
Two new studies dig into the long, curving path that cats took toward domestication
in Scientific American on 2025-11-27 19:00:00 UTC.
in Women in Neuroscience UK on 2025-11-27 15:00:29 UTC.
in The Transmitter on 2025-11-27 05:00:18 UTC.
Hidden beneath the hills of southern China, the JUNO observatory shows promise in solving neutrino mysteries
in Scientific American on 2025-11-26 19:40:00 UTC.
A team of researchers in Belgium got more than they expected when they tried to study potato pathogens: an unwelcome contaminant, a retraction, and a new paper the authors say is “an improvement over the first.”
In a now-retracted paper in Metallomics originally published in January 2024, researchers at the University of Liège described how the availability of nitrogen and iron changes how the bacterium responsible for potato common scab interacts with each of the two microorganisms responsible for early and late blight while infecting the same host. The results suggested the bacterium had antimicrobial properties against both microbes, with nearly all of the experiments focusing on the microorganism that causes late blight.
After publication, the authors realized their strain of the late blight-causing microorganism was not the one described in the paper. While the lab originally received the correct strain, “the plates became contaminated, and the fungal contaminant eventually overgrew the strain we intended to study,” Sébastien Rigali, corresponding author and professor at the University of Liège, told Retraction Watch.
Sample contamination in the lab is an ever-present problem for researchers and has led to the retraction of multiple papers. For studies using cell lines, resources such as the ICLAC Register of Misidentified Cell Lines and Research Resource Identifiers can help scientists prevent the use of tainted cells. Fungal contamination during preparation and experiments is such a common problem that many cell culture guidelines include a section on how to handle it.
For the fungus cultures used in the Metallomics study, genome sequencing confirmed the actual strain wasn’t the one described in the paper. “Because our work focused on macroscopic phenotypes, we did not initially notice the contamination; the original strain and the contaminant looked similar on Petri dishes,” Rigali told us in an email. Later, he examined the samples using electron microscopy and “realized the morphology was not as expected.”
He and his collaborators contacted the journal to request the retraction of the paper, as “the data themselves were scientifically sound, but the biological narrative was wrong because we were unknowingly working with an unintended organism.”
“Even with great care and the best working conditions, mistakes can still occur,” said Rigali. “What would have been far worse is failing to detect the contamination and allowing an incorrect story to remain published.”
The paper was retracted in April 2024, less than four months after its publication.
“All collaborators responded unanimously that ‘these things happen,’ emphasizing that what truly mattered was that we retracted the paper almost immediately after publication and submitted a revised version, which we all agree is an improvement over the first,” Rigali said.
The replacement paper was published in Metallomics in October 2024. The retraction notice was updated in November 2025 to note the replacement, “which now accurately refers to the correct strain,” the notice states. The new paper’s “objectives and context are properly aligned and more focused on specific environmental nutrients that trigger the antifungal activity of Streptomyces scabiei”— the potato scab-causing bacteria — “through iron deprivation.”
Often the retraction, replacement, and notice are published simultaneously in cases like these, Katherine J. Franz, editor-in-chief of Metallomics, told us. “This time there was a gap between the retraction and the revised version being published, and the revised version is now available,” she said.
Franz added, “My advice to other scientists who may find similar problems is to deal with the issue professionally and honestly so as to uphold the rigor of the scientific process.”
in Retraction watch on 2025-11-26 18:56:36 UTC.
Newly identified bones tie the mysterious Burtele foot to a new Australopithecus species that lived alongside Lucy more than three million years ago
in Scientific American on 2025-11-26 16:10:00 UTC.
in The Transmitter on 2025-11-26 16:00:55 UTC.
The presence of electrical activity has implications for surface chemistry, future human exploration and habitability on the Red Planet
in Scientific American on 2025-11-26 16:00:00 UTC.
A minor earthquake struck California in the early hours of the morning on November 26
in Scientific American on 2025-11-26 15:48:00 UTC.
in Science News: Psychology on 2025-11-26 14:00:00 UTC.
in OIST Japan on 2025-11-26 12:00:00 UTC.
Wild turkeys once nearly disappeared, but today they’re thriving.
in Scientific American on 2025-11-26 11:00:00 UTC.
in The Transmitter on 2025-11-26 05:00:27 UTC.
Skipping meals before a big holiday feast probably isn’t the best idea for gut health, experts say. Here’s how to prevent overeating on an empty stomach—and what to do if you do
in Scientific American on 2025-11-25 19:50:00 UTC.
Rebecca Sear is on a mission to convince publishers to retract articles that use a database that purports to rank countries based on intelligence.
To maintain the integrity of the scientific literature, the professor of psychology at Brunel University of London and her colleagues are writing to journals that have published papers relying on the so-called National IQ database, which has drawn criticism for the way its data were collected. Sear’s efforts have so far led to two retractions.
“There is absolutely no scientific merit whatsoever in the National IQ database,” Sear told Retraction Watch. “That means that any conclusions drawn from the database will be faulty and worthless.”
The database was first published in 2002 after psychologists Richard Lynn and Tatu Vanhanen constructed what they claimed were averaged estimates of IQ scores for different countries. Critics say the database fueled networks of “race science” activists who argue Western countries are under threat from certain ethnic groups with low intelligence and higher propensity to commit crimes.
In 2019, Lynn, a self-proclaimed “scientific racist,” was stripped of his emeritus status by Ulster University in Northern Ireland after students protested against his views, as we reported. By our count, three of Lynn’s papers have been flagged with expressions of concern.
While Sear hasn’t tracked how many papers Lynn — who died in 2023 — himself authored, she is tracking the number of studies that use his dataset. She shared with us a list that currently contains 174 such studies.
The latest retraction was issued November 2 by Cross-Cultural Research, which pulled a 2023 study Sear had flagged. The paper, “Likely Electromagnetic Foundations of Gender Inequality,” has been cited twice, according to Clarivate’s Web of Science.
The retraction notice doesn’t identify Sear by name but acknowledges she raised the concerns. It states:
Given the concerns raised by the reader, and that the original round of peer review did not meet the journal’s standards, the Journal Editor and Sage conducted a post-publication peer review of this article. Sage contacted the author for comments on the concerns raised.
Study author Federico R. León of the San Ignacio de Loyola University in Lima, Peru, agreed to the retraction, the notice states. He did not respond to our request for comment.
The other retraction prompted by Sear’s reporting, which we covered when it occurred last year, was for a 2010 paper about intelligence and infections published by the Proceedings of the Royal Society B.
Sear told us she has written to editors at 18 journals that have published papers using the IQ database. So far, however, her efforts have led to just those two retractions.
“The others, I was either ignored or just brushed off by the editors or publishers concerned on the whole,” she said. “I genuinely thought that when concerns were raised about research integrity of papers, something would be done, and that’s not the case.”
The dataset isn’t well known outside psychology, Sear noted, so editors of journals in other disciplines may not realize it is “fundamentally flawed.” One 2010 critique, which failed to replicate the database’s low IQ estimates for Africans, found that Lynn’s methods for selecting data were “unsystematic” and “too unspecific to allow replication.”
Jelte Wicherts, a statistician and methodologist at Tilburg University in the Netherlands who coauthored that 2010 critique, told us that although Lynn’s colleague, David Becker, has since addressed some methodological issues, the updated iterations of the database are still flawed.
“Becker’s NIQ database inherited many of the fundamental flaws in Lynn’s original national IQ work that we showed quite clearly to be unsystematic and biased towards Lynn’s expectations,” Wicherts told us. “Until it has been validated with rigorous means, I would not recommend the use of Becker’s IQ data in peer-reviewed research.”
Becker, now based at the Chemnitz University of Technology in Saxony, Germany, did not respond to a request for a comment.
However, one researcher, who has multiple studies on Sear’s list and spoke to Retraction Watch on condition of anonymity, told us:
At the time those papers were written, I was not fully aware of the depth of the controversy on that topic and its connection to racialized interpretations of intelligence. Researchers in developing countries are far from such debates, they are under more pressure to publish more research papers. So, my focus then was on contributing to empirical growth and development research, not on questions of race. I stopped using it and publishing papers in impact factor journals and have not relied on it in my more recent research.
The Lynn dataset never met the “minimal standards” for being published, even when it was collected, said Gregory Kohn, an associate professor of psychological and brain sciences at the University of North Florida. “I think there is a burgeoning consensus that is long overdue, that this dataset was not collected in a disciplined way to make any reasonable conclusions,” Kohn said.
Last December, publishing giant Elsevier said it was reviewing papers its journals had published in the past using the dataset. According to the Guardian, Lynn had published more than 100 papers in Elsevier journals, including several iterations of the NIQ database.
An Elsevier spokesperson told us: “In line with our commitment to investigate this matter thoroughly, we invited prominent members of the scientific community to aid us in gathering a consensus on the flaws in the national IQ database and other similar projects. This piece of work is nearing completion.”
One study on Sear’s list, co-authored by psychologist Bryan Pesta, is a 2019 paper published in the journal Intelligence that explored the link between national IQ and scores on a graduate admissions exam.
On November 4, a U.S. appeals court dismissed an appeal from Pesta, who was stripped of his tenure and fired by Cleveland State University after his colleagues claimed he engaged in research misconduct by misrepresenting his intended use of data from the National Institutes of Health to advance a theory that genetic differences lead to a racial IQ gap.
While Pesta has argued his academic freedom had been violated, the court found that, “Whatever the controversial nature of [Pesta’s work], CSU officials were reasonably alarmed by Pesta’s cavalier handling of sensitive genomic data, misleading representations to the NIH about the nature of his research, failure to observe basic conflict-of-interest reporting, and the impact that his actions had on CSU as a research institution reliant on the NIH.”
A spokesperson for Cleveland State told us:
The ruling confirms that the university and its employees acted properly and that the law and facts support our position. We strongly believe our faculty are entitled to full freedom in their research, but they must adhere to the highest standards of honesty, integrity and professional ethics.
Sear said journals should retract every paper based on Lynn’s database. “Every paper that is published using the database effectively justifies the use of the database,” she said.
Lynn’s work is systematically biased, Sear said, because it has unrepresentative samples — with relatively higher rural populations and numbers of children for some nations — leading to a skewed picture.
“So as long as papers which have used the database sit in literature, that will make it easier for people to continue using a worthless database,” she said. “Retracting these articles is particularly important in order to essentially stop the continued use of the dataset.”
in Retraction watch on 2025-11-25 18:24:22 UTC.
A new federal initiative aims to accelerate scientific discovery by uniting artificial intelligence with large federal datasets
in Scientific American on 2025-11-25 18:10:00 UTC.
Human brains go through five distinct phases of life, each defined by its own set of characteristics, according to a new study
in Scientific American on 2025-11-25 16:00:00 UTC.
Scientific American asked experts which type of Thanksgiving pie spikes blood sugar the most—and how to eat healthier while still enjoying the holidays
in Scientific American on 2025-11-25 13:00:00 UTC.
in OIST Japan on 2025-11-25 12:00:00 UTC.
As AI slips into kitchens, conversations and memories, Thanksgiving has become a test of how much we’re willing to outsource
in Scientific American on 2025-11-25 12:00:00 UTC.
An enigmatic group of fossil organisms has finally been identified—and is changing the story of how plants took root on land
in Scientific American on 2025-11-25 11:30:00 UTC.