Papers remain at the centre of how science is shared and assessed. But the system that supports them is under pressure: the classic evaluation model based on bibliometric output is losing credibility, while new forms of fraud keep growing and increasingly organised disinformation initiatives emerge.
This was the main thread of the scientific session “From impact factor to disinformation”, led by Jordi Camí, PRBB Director General, and organised by the PRBB Good Practices Group last November.
A model “cornered” by its own rules
In his talk, Camí connected issues that many researchers will recognise: the pressure to publish, evaluation models that still rely heavily on publication output, and a scientific culture where the rush to get into print can end up outweighing rigour, transparency and genuinely novel contributions to knowledge.
Camí also raised an uncomfortable but very current idea: when scientific careers depend mostly on metrics and publication volume, the system becomes more vulnerable. Not only because it encourages shortcuts, but also because it makes it easier for external actors to exploit weaknesses in the editorial circuit. In Camí’s view, we are no longer talking about isolated cases of misconduct, but about an environment where integrity becomes a systemic challenge, one that calls for systemic responses.
When fraud becomes a business
The session also focused on the so-called “fraud industry”, which, Camí warned, keeps growing and evolving, putting at risk the credibility of both the scientific community and academic institutions. As he explained, new journals are created every year to inflate self-citations, fake profiles appear on platforms like ResearchGate, and publications recycle, or outright plagiarise, content to inflate an institution’s paper and citation counts.
“The main reference system for scientific communication — papers — is going through an unprecedented integrity crisis.”
Jordi Camí, PRBB Director General
In practical terms, this means institutions need to think beyond detecting and penalising fraud. They need to actively create an environment that supports good practice: training, clear procedures, and incentives and recognition based on quality and integrity, not only quantity. This is one of the key messages in the PRBB Code of Good Scientific Practice, which emphasises institutional responsibility — not just individual responsibility — in safeguarding scientific integrity.
Generative AI: a new “mutation” of the problem
Onto this landscape comes a new element that is changing the pace of everything: generative Artificial Intelligence (AI), able to produce text, images and scientific-looking materials at high speed. Camí described this moment as a “new mutation” of the scenario, one that forces us to rethink controls, transparency and the culture of good practice.
This does not mean AI is the problem. But it does mean the scientific community needs a serious conversation about AI ethics, stricter rules with clear and transparent standards, and a stronger culture in which accountability does not get diluted.
At this point, it is hard not to ask whether the days of today’s scientific publishing model are numbered.
Can research evaluation survive without changing?
Camí closed the circle with a question many institutions face right now: if the system’s credibility is in doubt, does it make sense for research evaluation to continue as it is? And if not, what solid alternatives do we have?
On the evaluation side, some alternatives are already in use, such as narrative CVs and other approaches that aim to recognise contributions beyond a publication list.
This debate did not start today. For years, international initiatives have called for moving away from the impact factor and other shortcuts used to measure quality. The best known is the San Francisco Declaration on Research Assessment (DORA), which recommends not using journal-based metrics, such as the Journal Impact Factor, as a substitute for assessing the quality of an individual article or a researcher’s contribution.
In Europe, the most recent push comes from the Agreement on Reforming Research Assessment promoted by the Coalition for Advancing Research Assessment (COARA). It proposes widening what is valued (diverse outputs, open science practices, collaboration, integrity, impact, etc.) and reducing incentives that can end up feeding bad practice.
If you are interested in the topic, you can find more content on El·lipse under the tag Good Scientific Practice.