ploynog a day ago

Double-blind review is a mirage that does not hold up. While I was in academia I reviewed a paper that turned out to be a blatant case of plagiarism. It was a clear Level 1 copy according to the IEEE plagiarism levels (Uncredited Verbatim Copy of more than 50% of a single paper). I submitted these findings, together with the original paper and a breakdown of which parts were copied (essentially all of it), as my review.

A few days later I got an email from the author (some professor) who wanted to discuss this with me, claiming that the paper was written by some of his students who were not credited as authors. They were inexperienced, made a mistake, yadda yadda yadda. I forwarded the mail to the editors and never heard about this case again. I don't expect that anything happened; the corrective actions for a level-1 violation are pretty harsh and would have been hard to miss.

The fact that this person was able to obtain my name and contact info shattered any trust I had in the "blind" part of the double-blind review process.

The other two reviewers had recommended to accept the paper without revisions, by the way.

  • 0_____0 a day ago

    This seems like an issue of administration rather than an issue with the idea of a double-blind review. If you conduct a review that isn't properly blinded, and the blinding doesn't have an observable effect, can it really be called a double-blind review?

    • velcrovan a day ago

      Maybe more that a non-idealistic model of the real world, and common direct experience, show that incentives strongly favor an administrative approach that compromises the double-blind.

      • 0_____0 a day ago

        Unless there's a better way to do it, I think this shows a need for better structures for governance and auditing of review boards... Information and science care not for our human folly; it's up to us to pursue them properly.

  • rors a day ago

    I remember attending ACL one year, where the conference organisers ran an experiment to test the effectiveness of double-blind reviews. They asked reviewers to identify the institution that submitted the anonymised paper. Roughly 50% of reviewers were able to correctly identify the institutions, and I think a double-digit percentage were able to predict the authors.

    The organisers then made the argument that double blind was working because 50% of papers were not identified correctly! I was amazed that even with strong evidence that double blind was not working, the organisers were still able to convince themselves to continue with business as usual.

    • wtallis a day ago

      You're saying "not working" when you have only presented evidence for "not perfect".

      That experiment showed that even when asked to put effort into identifying the source of an anonymized paper—something that most reviewers probably don't put any conscious effort into normally—the anonymization was having a substantial effect compared to not anonymizing the papers.

      Am I missing some obvious reason why double-blind reviews should only be attempted if the blinding can be achieved with a near-perfect success rate, or are you just setting the bar unreasonably high?

      • gopher_space 21 hours ago

        The subtext to this whole comment chain is that you need to have hands-on experience with qualitative to quantitative conversions if you want to reason about the scientific process.

        > Am I missing some obvious reason why double-blind reviews should only be attempted if the blinding can be achieved with a near-perfect success rate, or are you just setting the bar unreasonably high?

        OP thinks you are looking at either signal or noise, instead of determining where the signal begins for yourself.

  • smallmancontrov a day ago

    When we criticize without proposing a fix or alternative, we promote the implicit alternative of tearing something down without fixing it. This is often much worse than letting the imperfect thing stand. So here's a proposal: do what we do in software.

    No, really: we have the same problem in software. Software developers under high pressure to move tickets will often resort to the minor fraud of converting unfinished features into bugs by marking them complete when they are not in fact complete. This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same. Often it's more efficient in both cases to just ignore the problem, which will generally self-correct with time. If not, we have to think about intervention -- but in software this story has played out a thousand times in a thousand organizations, so we know what intervention looks like.

    Acceptance testing. That's the solution. Nobody likes it. Companies don't like to pay for the extra workers and developers don't like the added bureaucracy. But it works. Maybe it's time for some fraction of grant money to go to replication, and for replication to play a bigger role in gating the prestige indicators.

    • ramblenode 20 hours ago

      > This is very similar to the minor fraud of an academic publishing an overstated / incorrect result to stay competitive with others doing the same.

      I completely disagree.

      For one, academic standards of publishing are not at all the same as the standards for in-house software development. In academia, a published result is typically regarded as a finished product, even if the result is not exhaustive. You cannot push a fix to the paper later; an entirely new paper has to be written and accepted. And this is for good reason: the paper represents a time-stamp of progress in the field that others can build off of. In the sciences, projects can range from 6 months to years, so a literature polluted with half-baked results is a big impediment to planning and resource allocation.

      A better comparison for academic publishing would be a major collaborative open source project like the Linux kernel. Any change has to be thoroughly justified and vetted before it is merged because mistakes cause other people problems and wasted time/effort. Do whatever you like with your own hobbyist project, but if you plan for it to be adopted and integrated into the wider software ecosystem, your code quality needs to be higher and you need to have your interfaces specced out. That's the analogy for academic publishing.

      The problems in modern academic publishing are almost entirely caused by the perverse incentives of measuring academic status by publication record (number of publications and impact factor). Lowering publishing standards so academics can play this game better is solving the wrong problem. Standards should be even higher.

    • buescher 18 hours ago

      Yeah, the alternative to a double-blind review that isn't really double-blind is a double-blind review that is.

      The alternative to not enforcing existing rules against plagiarism is to enforce them.

      The alternative to ignoring integrity issues, i.e. "minor fraud", in the workplace is to apply ordinary workplace discipline to them.

  • tensor 21 hours ago

    Seems to me that the review worked: you caught the plagiarism, even though the other two missed it. It's disturbing that the paper's author somehow found your contact information, though!

GuestFAUniverse a day ago

I am a co-author on a paper I never asked for, but my supervisor insisted, because the petty idea upgrading his desperate try to the point of "considerable at all" came from me. It was a chair which normally prided itself on only publishing in the most highly regarded journals of its field (internally graded A, B, C). They had a few years without a viable paper. Desperate to publish. From my POV this was a D. The paper is worthless crap, hastily put together within two weeks. It should have been obvious to the reviewers.

I feel ashamed that my name is on it. I wish I could retract it.

So, yes please: make it hard to impossible for paper mills and kill the whole publish or perish approach.

  • osrec a day ago

    I can't make sense of your first sentence. Can you rephrase it please?

    • lqet a day ago

      OP's supervisor wrote a paper without any merit. OP then provided a "petty idea" that made the paper "considerable at all". That's how he ended up as co-author.

      • GuestFAUniverse a day ago

        Thanks, exactly as I wanted it to be understood.

  • tovej a day ago

    I have a similar experience. We had a truly terrible paper written as a collaboration with a team from the US on a software project, integrating their "novel" and "innovative" component. The component took 1 hour to compile, the architecture made no sense, and the US professor constantly talked about nothing but high-flying marketing concepts. I managed to hack together a demo using their component, fixing build bugs and design flaws (the ones I could do something about).

    The proof of concept worked, but it wasn't doing anything new. We were just doing what we used to do, but now this terrible component was involved in it, making everything slower and more complicated.

    Somehow that became a paper, and somehow this paper passed review without a single comment (my feeling is it's because of the professor's name recognition). I'm ashamed to have my name on that paper.

  • cgcrob a day ago

    If it’s any consolation, I split up with an ex-partner after she wanted to put me as a co-author on a pseudoscience bullshit paper that she was working on to try and hit her quota. Her entire field, in the social sciences, is inventing a wild idea and using meta-analysis to give it credibility. Then flying to conferences and submitting expenses.

    I contributed nothing other than a statistical framework which was discarded when it broke their predefined conclusion.

    • anoncow a day ago

      When research is just means to an end...

      I think that if we were taught as children what earning a living means, people who only want to make ends meet would do it through other, less damaging methods. Sales and marketing, for example, are not bad places for such people. When it comes to research, people should know that the money will perhaps not be great.

      It is because we aren't aware of the full picture as children that we follow our passions (or we follow cool passions), then realise that money is also important, and then resort to unethical means to get that money. Let's be transparent with children about hard fields so that when they enter such fields they know what they are getting into.

      • somenameforme a day ago

        I suspect there are a lot of people who end up pursuing research because they enjoy college and learning, the idea of seeking out a job sounds rather less enjoyable, and more education will just equal more $$$ in said job anyhow, right?

        In the past this wasn't an issue because university was seen as optional, now in most places it's ostensibly required to obtain a sufficiently well paying job, and so much more of society ends up on a treadmill that they may not really want to hop off of.

      • acuozzo a day ago

        > so that when they enter such fields they know what they are getting into

        I don't think this would help. IMO, it's a money vs. effort thing. Yes, real research is hard, but if someone learns early on that the system can be easily gamed, then the required effort is relatively low.

        Plus, there's the friction factor. Moving from undergrad to grad to post-grad to professor keeps you within an institution you know.

        The game is this: get hired at a research university and pump out phony papers which look legit enough to not raise any suspicions until you get tenure. Wrap the phoniness of each paper in a shroud of plausible deniability. If anything comes out after you're tenured, then just deny and/or deflect any wrongdoing.

      • cgcrob a day ago

        Yeah that's about right.

        I think in some fields you walk into them with some kind of noble ideology, possibly driven by marketing but then you find out it's all bullshit and you're n-years into your educational investment then. Your options are to shrug and join in or write everything off and walk away.

        I don't blame people for taking advantage of it but in some areas, particularly health related, there are consequences to society past financial concerns.

    • archi42 a day ago

      I treat all social science degrees as "likely bullshit" these days. Could as well be astrology.

      A few computer science friends of mine worked at a social science department during university. Their tasks included maintaining the computers, but also supporting the researchers with experiment design (if computers were involved) and statistical analysis. They got into trouble because they didn't want to use unsound or incorrect methods.

      The general train of thought was not "does the data confirm my hypothesis?" but "how can I make my data confirm my hypothesis?" instead. Often experiments were biased to achieve the desired results.

      As a result, this kind of scientific misconduct was business as usual, and the guys eventually quit.

      • BeetleB a day ago

        Let me introduce you to theoretical condensed matter physics, where no one cares if the data confirms the hypothesis, because they are writing papers about topics that very likely can never be tested.

        At least in the social sciences there is an expectation of having some data!

        • araes a day ago

          That's actually the part about people constantly negging on social sciences [1] that I often find confusing.

          There's huge amounts of data available (geography, lots and lots of maps; history, huge amount of historical documentation; economics, vast amounts of public datasets produced every month by most governments; political science, censuses, voting records, driver registrations, political contest results all over the Earth - often for decades if not centuries).

          Most is relatively well verified, and often tells you how it was verified [2]. Often it's obtainable in publicly available datasets that numerous other researchers can verify was obtained from a legitimate source. [3][4][5][6][7][8][9][10][11][12]

          There's lots of data available. Much is also verifiable in a very personal way simply by walking somewhere and looking. In many ways, social sciences should be one of the most rigorous disciplines in most of academia.

          [1] Using Wikipedia's grouping on "social sciences" (anthropology, archaeology, economics, geography, history, linguistics, management, communication studies, psychology, culturology and political science): https://en.wikipedia.org/wiki/Social_science

          [2] Census 2020, Data Quality: https://www.census.gov/programs-surveys/decennial-census/dec...

          [3] Economic Indicators by Country: https://tradingeconomics.com/indicators

          [4] Our World in Data (with Demographics, Health, Poverty, Education, Innovation, Community Wellbeing, Democracy): https://ourworldindata.org/

          [5] Observatory of Economic Complexity: https://oec.world/en

          [6] iNaturalist (at least from a biological history perspective): https://www.inaturalist.org/taxa/43577-Pan-troglodytes

          [7] Coalition for Archaeological Synthesis, Data Sources: https://www.archsynth.org/resources/data-sources/

          [8] Language Goldmine (linguistics datasets): http://languagegoldmine.com/

          [9] Pew Research (regular surveys on economics, political science, religion, communication, psychology - usually 10,000 respondents United States, 1000 respondents international): https://www.pewresearch.org/

          [10] Marinetraffic (worldwide cargo shipping): https://www.marinetraffic.com/en/ais/home/centerx:-12.0/cent...

          [11] Flightradar Aviation Data (people movement): https://www.flightradar24.com/data

          [12] Windy Worldwide Web Cameras: https://www.windy.com/?42.892,-104.326,5,p:cams

          • autoexec 19 hours ago

            People who hate "social science" are surely targeting too wide, but there's plenty of terrible research hiding under that umbrella that relies exclusively on social media/internet surveys/self-reported data and absolutely deserves criticism.

            • archi42 6 hours ago

              Since I expressed negative feelings, my thoughts on this:

              I wouldn't say I hate social science; that's much too strong. The rampant fraud and poor methods in several of the fields just mean that I put less value on people's academic achievements than they deserve - which I don't like, because many surely sincerely tried to do good science and spent years getting there, but I can't filter a priori which camp a person belongs to. They should not be defunded or anything like that, but they need to get their act together. Somehow. I suspect a lot of this is driven by extrinsics (publish or perish; needing an advanced degree to get a job, when the advanced degree is actually pointless for the job; probably more things I'm not thinking of now), and those need to change to allow for good science.

              Take for example the department I mentioned above, which is essentially committing fraud. Word is, the professor running it is actually pretty damn good at what they do. They have an accepted grant application framed on the wall: "I need 2000 bucks. Signed Professor Foobar" (like $5000 in today's money); times have surely changed for the worse for them. And I pity that, since we're often (but not always, of course) talking peanuts in many of those fields. Especially for Masters-level research, or for a single paper.

              But I judge people in my life by their competence and character anyway, not by their degree. So politely ignoring their degree has little to no adverse effect on how I interact with them.

          • BeetleB 20 hours ago

            A lot of psychology research involves data not from these datasets, though.

            The complaint is that their data often doesn't strongly support the hypothesis, and dubious statistical techniques are performed to make it appear otherwise. Sometimes it's just poor statistics skills (not malicious intent).

            Physicists get away with it because they often just don't do any statistics. Often the data aligns so well with the hypothesis that you don't need any sophisticated techniques, or their work doesn't involve any data (like my example in my prior comment).

            Most US trained physicists have never taken a course in statistics. It's not in the curriculum in most universities. When I was in school and would point it out, the response was always "Why do we need a whole course in statistics? We learn it in quantum mechanics."

            No. That's probability you learn. Not statistics.

            In social sciences (and medicine) people take a lot more statistics courses because the systems are much more complex than typical physics systems. A lot more confounding variables, etc. They simply need more statistics.

            (Yes, yes. I know. There's probably some experimental branch in physics where people actually do use statistics. But most don't).

          • cgcrob 17 hours ago

            I’m not ragging on the whole field. If I narrow it down too much they’ll know who I am and you will know who they are.

            I’ll reduce it to a part of psychology.

      • cess11 a day ago

        Sounds like economics.

        Research fraud is common pretty much everywhere in academia, especially where there's money, i.e. adjacent to industry.

        • ninalanyon a day ago

          It does rather depend on the industry. Research in fields relevant to electrical engineering is much less likely to be fraudulent because the industry actually uses the results to make the products, and the customers depend on those products working as specified. If you discover a better and cheaper ceramic insulator you can be confident that transformer manufacturers will take it up, but the big companies are well stocked with experts in the field, so a fraudulent paper will quickly be spotted.

          • cess11 21 hours ago

            Graphene in electrical engineering is a staple of every (dis)reputable papermill.

            • genewitch 18 hours ago

              "New battery tech promises 1.5x density, no fire risk, 20 year lifespan"

      • cgcrob a day ago

        Glad to know they quit. That's exactly what I observed, except it was probably worse if I think back on it. I'm a mathematician "by trade" so I was sort of pulled into this by proxy, because they were out of their depth in a tangle of SPSS. Not that I wasn't, but at least I have a conceptual framework in which to do the analysis. I had no interest in or knowledge of the field, but when you're with someone in it you have to toe the line a little bit.

        Observations: Firstly, inventing a conclusion is a big problem. I'm not even talking about a hypothesis that needs to be tested, but a conclusion. A vague, ambiguous hypothesis which was likely true was invented to support the conclusion, and the relationship inverted. Then data was selected and fitted until there was a level of confidence where it was worth publishing. Secondly, they were using very subjective data collection methods run by extremely biased people, then mangling and interpolating the data to make it look like there was more observational data than there was. Thirdly, when you do some honest research, you don't publish it, because it looks bad to say the entire field is compromised right before the conference that everyone is really looking forward to and has already booked flights and hotels for.

        If you want to read some of the hellish bullshit, look up critique of the Q methodology.

    • BoingBoomTschak 4 hours ago

      I'm not a fan of flaming, but I have to get it out: where are the people screaming "THE SCIENCE IS SETTLED!" in any thread involving politically-relevant science? How can they read stuff like this constantly posted on HN then just trust sociology/psychology du jour to tell them what to think (or more likely, to help them justify what was already in their head)? Is Gell-Mann amnesia that potent?

    • meindnoch a day ago

      Luckily this made-up social science trash won't be used as evidence when shaping our policies, so it's pretty harmless! /s

      • comfysocks a day ago

        To be fair, lobbyists will use phony science from any field to influence policy, not just the social sciences. Think of the tobacco industry.

      • schnable a day ago

        The science is settled, bro.

  • saagarjha a day ago

    As a co-author are you not able to do so?

    • psychoslave a day ago

      In theory $SYSTEM is the most excellent thing that humanity could ever hope for, and everyone knows that by acting in accordance with the stated expected behaviors, they will act in the best way they can think of to achieve the best result for everybody.

      In practice people see that $SYSTEM is rotten and most likely to doom everyone in the long run, with increasingly absurd actions accepted silently along the way. But they are also firmly convinced that not bending the knee, being brave and saying out loud what's on everyone's mind, will only put them on the fast track to playing the scapegoat while changing nothing overall.

      Think about it: over-reporting of grain production was a major factor of the great Chinese Famine.

      https://en.wikipedia.org/wiki/Great_Chinese_Famine

      • wadadadad a day ago

        Thank you for providing the link for this - it's fascinating to see how such a failure could occur through human means, the significant impact it had, and how directly it can relate to academia (really, to many topics; anywhere there is a '$SYSTEM').

        The cover-ups in the article were also interesting - a deliberate staging for Mao to prevent the truth from being uncovered. I'm not sure how this compares directly (is there a centralized authority with power to fix the issue that is being lied to, compared to the decentralized "rotten" system, where the status quo is understood and 'accepted').

    • black_puppydog a day ago

      Technically being able isn't the same as your career surviving actually going through with it.

      • ithkuil a day ago

        Damned if you do and damned if you don't

proto-n a day ago

"For the first time, researchers reading conference proceedings will be forced to wonder: does this work truly merit my attention? Or is its publication simply the result of fraud? [...] But the mere possibility that any given paper was published through fraud forces people to engage more skeptically with all published work."

Well... spending a few weeks reproducing a shiny conference paper that simply doesn't work and is easily beaten by any classical baseline will do that to you in the first few months of your PhD, imo. I've become so skeptical over the years that I assume almost all papers to be lies until proven otherwise.

"This surfaces the fundamental tension between good science and career progression buried deep at the heart of academia. Most researchers are to some extent “career researchers”, motivated by the power and prestige that rewards those who excel in the academic system, rather than idealistic pursuit of scientific truth."

For the first years of my PhD I simply refused to partake in the subtle kinds of fraud listed in the second paragraph of the post. As a result, I barely had any publications worth mentioning. Mostly papers shared with others, where I couldn't stop the paper from happening by the time I realized that there was too little substance for me to be comfortable with it.

As a result, my publication history looks sad and my career looks nothing like I wished it would.

Now, a few years later, I've become much better at research and can get my papers to the point where I'm comfortable submitting them with a straight face. I've also come to terms with overselling something that does have substance, just not as much as I wish it had.

  • huijzer a day ago

    > I've become so skeptical over the years that I assume almost all papers to be lies until proven otherwise.

    I couldn't agree more. I have read a lot of psychology papers during my PhD and I think there is very little signal in them. Many empirical papers, for example, use basically the same "gold standard" analysis, which is fundamentally flawed in many ways. One problem is that if you used another statistical model, the conclusions would often be wildly different. Another is that the signal is often so weak that you can't use it to predict much (to be useful). If you try to select individuals, for example, the only thing you can tell is that one group is on average less neurotic; for individuals, you have no better chance of picking the right one than chance. The point of a good paper is to take these sketchy analyses and write a beautiful story around them with convincing speculation. It sounds absurd, but take a random quantitative psychology paper and check what percentage of the claims made in the discussion are actually based on the actual data from the paper.
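
    A quick numerical sketch of that last point (illustrative only, not drawn from any particular paper; the numbers are made up): a tiny group difference can be highly "significant" with enough data, yet give you almost no edge when predicting individuals.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n = 10_000
      group_a = rng.normal(0.0, 1.0, n)   # e.g. trait scores, group A
      group_b = rng.normal(0.1, 1.0, n)   # group B, tiny shift (d = 0.1)

      # Highly "significant" p-value, simply because n is large:
      print(stats.ttest_ind(group_a, group_b).pvalue)

      # Chance that a random member of B scores higher than a random member of A:
      wins = (rng.choice(group_b, 100_000) > rng.choice(group_a, 100_000)).mean()
      print(wins)   # roughly 0.53, i.e. barely better than a coin flip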

    But the worst part is that these problems have existed for literally decades. Nobody cares. The funding agencies grade people not on correctness but on the number of citations. As a result, you see many subcultures whose sole purpose is promoting the importance of their own subculture. It is quite common in academia to cite someone in the introduction just to "prove" that some idea is worth pursuing. But does it work? Doesn't matter. Just keep writing papers.

    So I'm not saying that all research is bad. I'm saying that indeed most papers are not very useful or correct. Many researchers try, but the incentives are extremely crooked.

    • pbronez a day ago

      How could this be corrected? The scientific community has lost a lot of credibility with the public, and the backlash is obvious in recent policy changes. Fast forward four years. Assume Trump and RFK jr have successfully destroyed the current system. What should replace it?

      How could the Federal government ensure that public monies only fund high quality research? Could policy re-shape the incentives and unlock a healthy scientific sector?

      • iinnPP a day ago

        Consequences with the current system would suffice.

        Ignorance as a defense needs to go too. It's too powerful, and we should rebalance it so that it hurts the supposedly ignorant rather than everyone else. Basically, a redefining of wilful ignorance so it's balanced as stated.

      • huijzer 21 hours ago

        I don’t know but these are exactly the right questions to ask!

nis0s a day ago

> Proclaiming that your work is a “promising first step” in your introduction, despite being fully aware that nobody will ever build on it.

Science produces discrete units which can be used in different ways, if not in their exact form from the preceding research. I am not sure it's reasonable to say that existing ideas, even if not cited, are not inspirational (to the researchers themselves). Peer review isn't perfect, but I think that all accepted papers have something academically or scientifically relevant, even if there's no guarantee that the paper will generate hundreds of subsequent citations. I think improving your subsequent work is more important, which includes mentioning why you think some previous work may not be as relevant anymore. This last step is often missing from many research papers.

I think the author is right that it doesn’t quite make sense to publish anything you know isn’t quite correct. But I can think of several papers in different fields which someone may think are “not quite correct”, but the goal of such papers, I think, is to demonstrate the power of low probability scenarios, or edge cases. Edge cases are important because they break expected behavior, and are often the root cause of system fragility, system evolution, or poor generalization in other systems.

ngriffiths a day ago

I had two experiences at polar opposite ends of the spectrum - one research team I worked on had very high standards and was comfortable being patient for material that had value. The other involved an approach that obviously stood no chance of being useful to anyone.

Some differences:

- The first one was in a space with more low hanging fruit

- The first one was after large effect sizes, not the kind where you can massage the statistical model

- The second one was a topic with far higher public interest

- The second one was primarily an analytic project, whereas the first one was primarily experimental

I feel like bad science lives in the middle of a spectrum - on one end you have young fields/subfields with boring but impressive experimental breakthroughs, and on the other end you have highly political questions that have been argued to death without resolution. Bad science is about borrowing some of the strategies used in politics because all the important experiments have already been done.

bjackman a day ago

> Submitting a paper to a conference because it’s got a decent shot at acceptance and you don’t want the time you spent on it go to waste, even though you’ve since realized that the core ideas aren’t quite correct.

I don't see a problem with this? If papers are the vehicle for conference entries, why shouldn't authors submit one just because it turned out to be wrong? Conferences are for discussion. So go there and discuss it... "My paper says XYZ, but since I wrote it I realised ABC" - sounds like a good talk to me?

(Naivety check: I am not an academic)

  • sideshowb a day ago

    Yes. As the saying goes, if we knew what we were doing it wouldn't be research. Finished papers often have flaws; if you try to write something perfect you may never finish it. They're called limitations, and you list them in the conclusions and suggest addressing them in future work.

    (Experience check: I is one)

  • lgeorget a day ago

    In fields other than computer science that would be more the case, I think, because conferences are not given as much importance. Journals are what matter, and since those publications take more time and are usually more selective (for the well-known journals at least), you tend to have better science in them. Computer science has few journals, and the standard venue of publication is the conference.

  • light_hue_1 a day ago

    That's not what papers are for. But I can see how not being an academic would make you think that.

    What you're describing are workshops with what we would call non-archival proceedings. Places where you write whatever you want and then talk about it.

    Publications, conference or journal, are supposed to be what are called archival. They are a record of what we've discovered and want to share with the world. They are supposed to be sent into the world after we carefully complete a line of work.

    Publications are not supposed to spam the system with half-baked junk. Sadly, that's what a lot of people are doing these days.

    • foldr a day ago

      Some fields do have the opposite problem, though, where standards for publication are so high that they prevent publication of useful ideas or results that could be built on by other researchers. I don’t think a published paper should have to meet some kind of gold standard of completeness and correctness. It just has to report something new or interesting with any appropriate caveats attached.

      • genewitch 17 hours ago

        I don't know; I'm sure Monsanto put out [0] a lot of papers about how effective glyphosate is, but another team decided to test glyphosate against the "inert" ingredients in Roundup and found glyphosate was actually the weakest pesticide of the group.

        Now, my pet theory is that they knew glyphosate wasn't that great, but talked it up in papers as a sacrificial anode sort of thing "gee shucks it looks like glyphosate based pesticides are harmful to humans (or bees, or fish, or) so we'll stop manufacturing that formulation."

        But, possibly due to academia, they have fanboys and cheerleaders, and I think that's why it's still around and in heavy use even though we're not sure it's a good idea.

        [0] Bayer Monsanto funds studies at agricultural universities.

        P. S. Just watch.

jimbokun a day ago

> Most researchers are to some extent “career researchers”, motivated by the power and prestige that rewards those who excel in the academic system, rather than idealistic pursuit of scientific truth.

This is the funny part. There is little to no power and prestige to be had in the academic system. To a first approximation no one outside academia cares.

I was just working as a staff programmer and taking grad courses with my tuition benefit, and found myself getting caught up in the mentality of needing a PhD to really be successful and valuable. Then got a job in industry making far more money and realized how academia is a small self contained world with status hierarchies irrelevant outside that small world.

  • BeetleB a day ago

    In the social sciences, there is a lot of prestige to be had. Do some ground breaking work, engage with the public via bestselling books, and then get invited by the president to work on policy.

    Even if they don't work for the administration, there are plenty of other bodies that will value them and pay large sums of money (or let them have large influence).

    Very common amongst economists, and more and more common amongst disciplines like psychology.

    Even in technical fields, if you can manage to become a big name, you can do consulting work and get paid quite well.

    > Then got a job in industry making far more money

    This is not a healthy way to look at it.

    The average mechanical engineer isn't making tons of money in industry. A ME professor at a top university likely makes more. A biology major with just a BS degree will make less than the average biology associate professor.

    But more importantly, there's a simpler reason why money is a poor metric to measure: You can always get more money in finance or medicine than as a mechanical engineer. Does it make sense to denigrate a whole profession just because one can make more money elsewhere?

  • vonneumannstan a day ago

    >This is the funny part. There is little to no power and prestige to be had in the academic system. To a first approximation no one outside academia cares.

    They have power over their students and relative power over other professors. That's plenty enough incentive for most. There can also be fame and fortune for the most famous among them. See Francesca Gino, Dan Ariely, etc.

michaelt a day ago

> And we must ensure that explicit suggestions to modify one’s science in the service of one’s career – “you need to do X to be published”, “you need to publish Y to graduate”, “you need to avoid criticizing Z to get hired” – carry social penalties as severe as a suggestion of plagiarism or fraud.

One of the pernicious things in this area is that, even as we teach young researchers how to avoid making mistakes and engage sceptically with the work of others and that scientific fraud is a nontrivial issue, we also tell them how to commit fraud themselves and that their competition is doing it.

"Watch out for P-hacking, that's where the researcher uses a form of analysis that has a small chance of a false positive, and analyses loads of subsets of your dataset until a false positive arises and just publishes that one"

"Watch out for over-fitting to benchmarks, like a car taking the speed crown by sacrificing the ability to corner"

"Watch out for incomplete descriptions of test setups, like testing on a 'continent-scale map' but not mentioning how detailed a map it was"

"Watch out for citations where the cited paper doesn't say what is claimed, some people will copy-and-paste citations without reading the source paper"

"Watch out for papers using complicated notation, fancy equations and jargon to make you feel this looks like a 'proper' paper"

"Watch out for deceptive choice of accurate numbers, like a study with a 25% completion rate including the drop-outs in the number of participants"

"Watch out for simulations with inaccurate noise models, if the noise is gaussian in the simulation but a random walk in reality, great simulated results won't transfer to reality"

I've made no suggestion at all that you should modify your science or commit fraud - but I've also just trained you in how to do it.
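
To make the first of those warnings concrete, here's a rough simulation of the P-hacking pattern (illustrative only; real cases slice a single dataset into overlapping subsets, which this simplifies to independent draws):

  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(42)
  hits = 0
  for _ in range(1_000):                      # 1,000 simulated "studies"
      for _slice in range(20):                # 20 arbitrary ways to slice the data
          a = rng.normal(size=30)             # treatment group, no true effect
          b = rng.normal(size=30)             # control group
          if stats.ttest_ind(a, b).pvalue < 0.05:
              hits += 1                       # report only this "significant" slice
              break
  print(hits / 1_000)                         # roughly 0.64: most studies "find" something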

  • proto-n a day ago

    It's really not that hard to come up with ways to commit fraud if you want to. On the other hand, it's very easy to make such mistakes if you don't know to avoid them. This characterization is very misguided IMO.

    • michaelt a day ago

      Ah, perhaps I wasn't clear about what I'm trying to say. I don't think we should stop training researchers in common mistakes and fraudulent methods to watch out for.

      I'm just saying: I don't believe anyone actually tells budding researchers that they should commit fraud. Instead I think the process is probably more like this:

      Year 1: Statistics/research training. Here are a load of subtle mistakes to watch out for and avoid. Scientific fraud happens sometimes. Don't do it, it's very dishonest.

      Year 2: Starting research. Gee a lot of these papers I'm reading are hard to reproduce, or unclear. Maybe fraud is widespread - or maybe they're just smarter or better equipped than me.

      Year 3: "You really ought to have published some papers by now, the average student in your position has 3 papers. If you don't want to flunk out you really need to start showing some progress"

      • proto-n a day ago

        I still disagree. It's more like "omg, I should have published at least a few papers by now, what am I doing", and then you start frantically looking for provable things in the dataset. You find one that you can also support with a nice story. Now either a) you were not taught how or why this is wrong and you publish the paper, or b) you were, and you know that you should collect a separate dataset to test the hypothesis. But also, there is a huge existential pressure to just close your eyes and roll with it.

        It's not that you need to be taught how to cheat, it's that you need to be taught how to avoid unintentionally cheating.

        • genewitch 17 hours ago

          Maybe I'm misreading but you both seem to say the same thing.

          New student is shown how to read a paper, how to spot egregious errors and all the things listed above.

          The student, I guess, feels forced to publish, and maybe uses murkier tactics to get the paper published.

          As Dr Frank Etscorn said, "I can show anything correlates to anything else." We were discussing vitamin D papers and I was testing that paper funding AI mentioned on HN a few times last year.

          • genewitch 13 hours ago

            Jeez, I apologize. "testing that 'paper-finding' AI mentioned"

        • genewitch 17 hours ago

          I can't edit from my phone; maybe the nuance is in your final sentence?

lqet a day ago

> * A group of colluding authors writes and submits papers to the conference.

> * The colluders share, amongst themselves, the titles of each other's papers, violating the tenet of blind reviewing and creating a significant undisclosed conflict of interest.

> * The colluders hide conflicts of interest, then bid to review these papers, sometimes from duplicate accounts, in an attempt to be assigned to these papers as reviewers.

Is it that common that conference reviewers also submit papers to the conference? Wouldn't that alone already be a conflict of interest? (After all, you then have an interest in dismissing as many papers as possible to increase the likelihood of your own paper being accepted). And how do you create "duplicate accounts"? The conferences I have submitted to, and reviewed for, all had an invitation-like process for potential reviewers.

  • michaelt a day ago

    Many bodies that fund academic work will happily pay for you to fly to a conference and stay at a hotel if you're presenting a paper at the conference - but they'll be a lot less willing if you aren't presenting anything. So a decent % of attendees will be presenting papers.

    And finding reviewers who know their stuff, who'll work for free, and who'll review thoroughly in a short timescale isn't easy.

  • proto-n a day ago

    Not only is it common, it's become a requirement to review papers if you submit one yourself. Yeah, it's not ideal for multiple reasons (what you said + prompt-engineering grad students dismissing proper papers without having the slightest idea about the field), but the number of submissions is so incredibly huge that it's impossible to do it any other way.

    • lqet a day ago

      Then I guess I should be grateful that my academic niche is so small.

  • twic a day ago

    Even if reviewers weren't allowed to be submitters, if there is more than one conference, or the conference runs for more than one year, the same mechanism can be used.

nicwilson a day ago

Hmmm, I wonder if you could turn this into a sport and have like one paper per year per group of total BS, and shame on the reviewers/conference/journal if they don't catch it, and kudos to the submitters the more blatant it is.

Come to think of it, is there a "Journal of Academic Fraud"?

Peteragain a day ago

I agree with the analysis completely, but the solution is depressing. I keep thinking that publications on arxiv might be a better source of knowledge given the motivation for publishing there is not a contribution to career progression. Keyword search over arxiv papers? But perhaps we should bring back the idea of anonymous publishing:-0

wucke13 a day ago

Tangentially relevant: Gernot's list of benchmarking crimes.

https://gernot-heiser.org/benchmarking-crimes.html

  • bjackman a day ago

    Ha, I thought this would be a really useful resource, but I think the people the author is complaining about do much better than most of the benchmarking I see in industry.

    Almost all the benchmarking results I see are just a percentage difference between two arithmetic means, with no statistical analysis whatsoever.

    Very common interaction: QA folks say "your change degraded some of our metrics and improved some others". I know they are full of shit because it's impossible that my change improved any perf metrics. I ask for statistical details; they don't have any; this meeting was a waste of time, and it will be next time too.

    The fact that I get these reactions suggests that everyone else just lets each other get away with it.
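
    Even a minimal comparison like this (a sketch with made-up timings, assuming you have several runs per variant) beats a raw percentage delta between two means:

      import numpy as np
      from scipy import stats

      baseline = np.array([102.1, 99.8, 101.5, 100.9, 103.2])    # ms per run
      candidate = np.array([101.0, 100.2, 99.5, 102.8, 100.1])   # ms per run

      delta = (candidate.mean() - baseline.mean()) / baseline.mean() * 100
      p = stats.ttest_ind(baseline, candidate, equal_var=False).pvalue  # Welch's t-test

      print(f"delta: {delta:+.1f}%, p = {p:.2f}")
      # p well above 0.05 here: the "improvement" is indistinguishable from run-to-run noise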

    • porridgeraisin a day ago

      Yep. The most recent example that's stuck in my head is actually much worse: they didn't even take the mean! One sample!

      https://github.com/denoland/pm-benchmark

      Check the run bench shell script (there's not much else in the repo anyways)

      • genewitch 17 hours ago

        Hey, that's perfectly valid for arguing with your friend about which one to deploy on our server, all things equal.

        I do this sort of thing to see what tools are faster all the time. ripgrep, ag(silver searcher), grep, MongoDB was one we were arguing about for a while recently.

        • burntsushi 17 hours ago

          Which one won? :)

          (I'm the author of ripgrep.)

          • genewitch 13 hours ago

            ripgrep, except against the full 160GB dataset, where MongoDB was faster on my Ryzen.

            I have a lot of subtitles. I'm partially hard of hearing, and partially I can't stand the way everything is mastered, so I use volume normalization (sometimes called "night mode"; Vizio calls it this) and subtitles to make up for the fact that the audio tracks in most things are bad.

            Well, a side effect of subtitles is that now I have context for every video that I can search. grep was grep.

            You didn't think I'd leave you hanging: https://i.imgur.com/Vs5AAT7.png

            Some other non-statistics from that day: a 15GB sorted password list, newline delimited, UTF-8, on a spinning-rust drive; 64 seconds (~234MB/s) to make a copy of the file. ag and rg took 3.2 seconds to search the copy. I'm actually hesitant to state that grep took 52 seconds...

            Thanks for replying, thanks for making me remember the great conversations we had around those topics a couple months ago, and thanks for creating ripgrep, it's my go-to for anything non-trivial!

            • burntsushi 5 hours ago

              Love it! That's awesome. Thank you for replying. :-)

              I've occasionally wanted to put the subtitles from all of my Simpson episodes into an easily searchable format. What do you use to extract subtitles?

jarbus a day ago

I don't have top-tier publications, and I haven't gotten any awards. I've seen people get awards for bullshit and farming prestigous publications. I only have one citation for work that is 100% mine. But every time I read something like this, I feel proud that I don't bullshit, and at least try to do real science. I truly believe in all my work and every sentence I write.

That being said, the insane emphasis on venue is what's pushing me out of academia. I can't compete with people like this.

  • tmaly a day ago

    I am just thinking about what will happen when these papers are used to train LLMs

vladms a day ago

Sometimes it is good to make parallels with other things to check if proposals make sense. How would "go and try to steal every car out there, so that car manufacturers improve their car security" sound?

For me it sounds counter-productive. I have a feeling that lately (for decades now) many people focus on the negatives rather than the positives. Should we focus on the 3 amazing papers this year, cited by hundreds, that resulted in clear progress, or should we complain that 100 papers are useless? Let's focus on the 100, because "someone is wrong on the internet".

I did a PhD (so I might have more experience), but papers are meant for dissemination. For me, everybody who wants papers to be "perfect"/"useful" imagines a system that does lots of work for them. The system could be improved, but if anything (just throwing out an idea) maybe researchers should try to do research in industry to prove themselves, then come back to academia after 10 years (maybe with savings) so that they are more independent of "career progression". A lot of research was done (historically, >100 years ago) by rich people, not constrained by a career.

  • anjc a day ago

    > Should we focus on the 3 amazing papers this year, cited by hundreds, that resulted in clear progress or should we complain that 100 papers are useless?

    Agree, and it seems that this is how fields naturally evolve anyway.

userbinator a day ago

(2021)

> Together, we can force the community to reckon with its own shortcomings, and develop stronger, better, and more scientific norms. It is a harsh treatment, to be sure – a chemotherapy regimen that risks destroying us entirely. But this is our best shot at destroying the cancer that has infected our community.

And now, nearly 4 years later, research funding gets DOGE'd...

  • pbronez a day ago

    Yup, and MAHA'd. The question is: what comes next? Once the current administration executes their democratic mandate to destroy institutions, how should we rebuild those capabilities?

    In this case, how could the federal government ensure that academic funding flows to researchers and institutions that are doing genuinely high quality research? How can we remove funding from low-quality and fraudulent actors?

throw4847285 a day ago

Unfortunately, I think this article is not pessimistic enough. If it were true that academic fraud is important for establishing the boundaries of respectable scholarship in a field, then behavioral economics wouldn't exist anymore. And evolutionary psychology would be a fraction of its current size.

Trendiness trumps all notions of academic rigor, and as long as a field "feels like" it's on the cutting edge it can go pretty far before collapsing in on itself.

  • daveguy a day ago

    I'm wondering how someone knows the size of the field of evolutionary psychology, and what it should be instead of what it is, so well that they feel no evidence is necessary. Are you in the field yourself? Or do you have questions about evolution in general? Or none of the above?

    • throw4847285 21 hours ago

      Just cut out everybody who is studying "sexual selection" and I think the field is way healthier. Weird that so many evo psych scholars of a certain persuasion are fixated on that. I have a theory, but it veers into armchair non-evolutionary psychology, and it's not very nice.

    • tovej a day ago

      Evolutionary psychology is highly speculative and is widely known to be full of questionable results.

      So much so that there's a wikipedia page for how bad it is: https://en.m.wikipedia.org/wiki/Criticism_of_evolutionary_ps...

      • throw4847285 21 hours ago

        The SEP page is a slightly more rigorous take, though unlike Wikipedia it is clearly primarily authored by one person, a philosopher of biology who I don't know much about. Still, I think it's a fair and detailed overview of the field, from a methodological perspective.

        https://plato.stanford.edu/entries/evolutionary-psychology/

gumbojuice a day ago

For one of my first papers during my PhD, I had a blatant bug in my implementation. I only realized it while doing further research building on that previous paper.

I think in any field it's natural to start out naive, with an idea that may be just a few steps away from solving something important. Then, somewhere in the middle, you realize you're not, and you scramble with the ethical dilemma around your work: is it "good enough" or not? I was there, anyway.

  • boxed a day ago

    There shouldn't be an ethical dilemma. If you find out your previous paper was wrong, you can publish a new paper saying so. That's 2 papers for the price of one(ish). Everyone wins :P

tonyg a day ago

> Undermining the credibility of computer science research is the best possible outcome for the field, since the institution in its current form does not deserve the credibility that it has.

Horseshit. This might be true for AI research (and even there that's an awfully broad brush you're using, mate), but it's certainly not true for other areas of computer science.

  • CJefferson a day ago

    Is there a lot of good research in computer science? Of course.

    Is there even more stuff which really shouldn't be published, with experiments which are abused to show off how great new technique A is, while hiding that this was attempt 72 at making an experiment that showed A was great? Also of course.

  • SirHumphrey a day ago

    Maybe your lab is different (if you work in a research setting), but most researchers will readily admit that most of their research output is at least somewhat bull**. It's something that is trained into people from high school research projects onwards - the people judging your results usually do not have the time or ability to scrutinize your work, and even if they do, they usually have much more important things to do than check your mediocre results.

    As a society, we have far too much trust in science. However, any time this argument is brought up, we focus on conspiracy theorists who struggle with 100+ year old theories, as if the appearance of public trust in science will change their minds, ignoring that any member of the public who accidentally discovers the Jenga tower science is built on (but which is hidden) will become much more likely to believe those charlatans in the future.

    • tensor 21 hours ago

      Excuse me? Based on what? In my time in academia exactly zero researchers would claim that their work is somewhat bullshit.

      As a society, there is laughably little support for science, instead the majority of policy and business decisions are based on fairy tales and snake oil. We need more trust in science.

      • genewitch 17 hours ago

        As of this comment's scroll location, there are at least three people who have admitted to co-authoring a paper they knew was BS.

        So you don't know anyone, but I can counter with those three anecdotes.

        3>1

        QED, I am right, statistically.

smolder a day ago

It's unfortunate that it's more work and less reward to shut down fraud than to perpetrate it. I think it's more of a problem than ever before, even though grifting has always been a thing.

bowsamic a day ago

It's not an exaggeration to say I literally don't know a single person who doesn't engage in this kind of academic fraud to some degree

jillesvangurp a day ago

The academic world is essentially the world's oldest social network. The way academic publishing works is through a convoluted reputation system of academics endorsing each other's work via various publications, giving each other a 'like' by referring to work, debating work in public at conferences, hiring each other's students and protégés, etc.

Fraud here basically means faking reputation. There are many ways to do this. And it's common because doing scientific work goes hand in hand with very generous funding. And money corrupts things. So attempts to fake reputation, plagiarize work, artificially boost relevance through low reputable referencing, bribes, etc. are as old as scientific publishing is.

There are a few interesting dynamics here that counter this: high quality publications will want to defend their reputation. E.g. Nature retracting an article tends to be scandalous. They do it to preserve their reputation. And it tends to be bad for the reputation of affected authors. Their reputation is based on them having very high standards and a long history of important people publishing important things that changed the world. Every time they publish something, that's the reputation that is at stake. So, they are strict. And they should be.

The problem is all the second and third rate researchers that make up most of the scientific community. We don't all get to have Einstein level reputations. And things are fiercely competitive at the bottom. And if you have no reputation, sacrificing it is a small price to pay. Also the prestigious publications are guarded by an elitist, highly political, in-crowd. We're talking big money here. And money corrupts. So, this works both ways.

With AI thrown in the mix, the academic world has to up its game. And the tools it is going to have to use are essentially the same used in other social networks. Bluesky, Twitter, etc. have exactly the same problem as scientific publishers; but at a much larger scale. They want to surface reputable stuff and filter out all the AI generated misinformation and it's an arms race.

One solution is using more AI or trying other clever tricks. A simpler solution is tying reputations to digital signatures. Scientific work is not anonymous. You literally stake your reputation by tying your (good) name to a publication and going on the record by "publishing" something. Digital signatures add some strength to that that AIs can't fake or forge. Either you said it and signed it; or you didn't. Easy to verify. And either you are reputable, by having your signature associated with a lot of reputable publications, or you are not. Also easy to verify.

If disreputable stuff gets flagged, you simply scrutinize all the authors and publications involved and let them sort out their reputations by taking appropriate actions (firing people, withdrawing articles, publicly apologizing, etc.). They'll all be eager to restore their reputations so that should be uncontroversial. Or they don't and lose their reputation.

Digital signatures are a severely underused tool currently. We've had access to those for half a century or so.
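
As a rough illustration of the mechanics (a sketch only, not an existing publishing workflow; the file name is made up), signing and verifying a manuscript with an author key is a few lines with the Python cryptography package:

  import hashlib
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

  author_key = Ed25519PrivateKey.generate()      # kept private by the author
  public_key = author_key.public_key()           # published alongside their name

  manuscript = open("paper.pdf", "rb").read()    # hypothetical manuscript file
  digest = hashlib.sha256(manuscript).digest()
  signature = author_key.sign(digest)            # attached to the publication record

  # Anyone can later check that this exact file was signed by this author's key:
  try:
      public_key.verify(signature, digest)
      print("signature valid")
  except InvalidSignature:
      print("file altered or key mismatch")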

The challenge isn't technical but institutional. Lots of disreputable people and institutions are currently making a lot of money by operating in the shadows. The tools are there to fix this. But people don't seem to necessarily want to.

ein0p a day ago

What's funny is that if you're well-read in your field, all such bullshit is plainly apparent. You know what the good baseline is, for example, and you know why the author didn't choose it. You know the deficiencies of the benchmarks and see how they were exploited to juice the results. You know their approach is infeasible IRL and can clearly articulate why. Etc, etc. You folks aren't fooling anyone but fools. It's sort of like how a lazy employee thinks their manager doesn't know they're slacking off like mad. As a former manager I can tell you - that is most certainly not the case, and there's no way to hide.

  • Tyr42 a day ago

    This was the major upside of taking a seminar class where we read papers and discussed them. The prof tried to highlight some of these points - not just seeing the graphs that are there, but also the graphs they could have added but didn't.

anjc a day ago

Overly pessimistic, and it doesn't acknowledge that heads of steam only build behind promising findings, while deficient (or 'fraudulent') work dies on the vine, published or not. In other words the system tends to work.

Secondly, there are many ingredients required to successfully publish, communicate science, foster collaboration, etc., beyond technical brilliance. I'm sure we all know many technically brilliant people whose career never advanced because they lacked in some necessary area. People shouldn't be discouraged from improving in all areas because OP's delicate genius is offended by their technical ability.

Speaking of discouragement, it's a shame and a disgrace that you publicly called your colleague's work bullshit, including a first author that isn't yourself.

  • bluefirebrand a day ago

    > Overly pessimistic, and doesn't acknowledge that heads of steam only build behind promising findings, while the deficient (or 'fraudulent') work die on the vine, published or not.

    This might be true in the hard sciences, where a "head of steam" can only build based on real, replicable results.

    But it's very common that public policy is proposed and adopted based on findings from soft sciences like psychology and sociology

    If policy is adopted based on a research paper, I would count that as a "head of steam" being built.

    And if that paper is fraudulent, then we are adopting well-intentioned policy on false pretenses

    • anjc a day ago

      I did just mean AI and Computer Science per OP. By "head of steam" I mean to say that much research is built on it, think the likes of "Attention is All you Need". There isn't quite an equivalent of this in public policy in my experience.

      Conversely, computer science/AI doesn't have an equivalent of the rigor that public policy research tends to go through. CS has, e.g., benchmark datasets and typical evaluation metrics, but these are more like norms than requirements, whereas in public policy, instruments for validation are far more rigorously tested and enforced. Depending on the area.

      I agree that outright fraud would be detrimental, but I think OP overblows this issue completely and should apologise to his co-authors.