aspenmayer 10 hours ago

HN would benefit from a specific, explicit policy such as this.

  • tptacek 7 hours ago

    We have an explicit policy: you can't post LLM stuff directly to HN.

    https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

    • throwaway314155 7 hours ago

      Doesn't seem very explicit if you have to search a mod's comment history to find it.

      • tptacek 7 hours ago

        Lots of rules on HN work that way. It's a whole thing. We probably don't need to get into it here. I think it works pretty well as a system. We have a jurisprudence!

        • fngjdflmdflg 4 hours ago

          I don't think that is correct. Dang usually links directly to the guidelines and sometimes even quotes the exact guideline being infringed. '"dang" "newsguidelines.html"' returns 20,909 results on Algolia.[0] (Granted, not all of these are by Dang himself; I don't think you can search by user on Algolia?) Some of the finer points relating to specific guidelines may not be written there directly, e.g. what exactly is considered link bait, but I don't think there are any full-blown rules not in the guidelines. I think the reason LLMs haven't been added is that it's a new problem, and making a new rule too quickly that may have to change later will just cause more confusion.

          [0] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

          • tptacek 4 hours ago

            No, there are several things like this that aren't explicitly in the guidelines and aren't likely ever to be. We'd get into a very long meta thread talking about what kinds of things land in the guidelines vs. in Dan's "jurisprudence" threads; some other time, maybe.

            • aspenmayer 2 hours ago

              I think it’s okay to have unwritten rules that are inferred. I am not trying to make the perfect the enemy of the good. That said, is HN best served by this status quo? Folks are genuinely arguing against the reasoning for such a rule in the first place, arguing that a rule against LLM-generated content on HN is unenforceable and so is pointless; others are likely unaware any such rule even exists in the first place; you are countering that the rule is fine, but not so fine that we add it to the guidelines.

              I don’t know if this situation benefits from all of these moving parts; perhaps the finer points ought to be nailed down, considering the explicitness of the rule itself in practice.

        • Jerrrry 6 hours ago

          Well said, "Better unsaid."

          Shame is the best moderator.

          Also, HN's miscellaneous audience of rule breakers benefits from some rules being left unstated. Especially this one, as it is almost as good as a "Gun-Free Zone".

    • aspenmayer 7 hours ago

      This policy should manifest itself in the Guidelines, if HN users are expected to know about it and adhere to it.

      • Jerrrry 6 hours ago

        Its human users can infer it; the other users can't, yet.

  • jsheard 8 hours ago

    The community already seems to have established a policy that copy pasting a block of LLM text into a comment will get you downvoted into oblivion immediately.

    • aspenmayer 8 hours ago

      That rubric only works until sufficiently advanced LLM-generated HN posts are indistinguishable from human-generated HN posts.

      It also doesn’t speak to the permission or lack thereof of training LLMs on HN content, which was another main point of OP.

      • JavierFlores09 7 hours ago

        > That rubric only works until sufficiently advanced LLM-generated HN posts are indistinguishable from human-generated HN posts.

        if a comment made by an LLM is indistinguishable from a normal one, it'd be impossible to moderate anyway, unless one starts tracking people across comments and checking the consistency of their replies and overall stance. So I don't particularly think it is useful to worry about people who will go to extra lengths to go undetected

        • aspenmayer 7 hours ago

          > if a comment made by an LLM is indistinguishable from a normal one, it'd be impossible to moderate anyway, unless one starts tracking people across comments and checking the consistency of their replies and overall stance. So I don't particularly think it is useful to worry about people who will go to extra lengths to go undetected

          The existence of rule-breakers is not itself an argument against a rules-based order.

        • tredre3 7 hours ago

          HN's guidelines aren't "laws" to be "enforced", they're a list of unwelcome behaviors. There is value in setting expectations for participants in a community, even if some will choose to break them and get away with it.

          • bawolff 5 hours ago

            If comments by LLMs were actually as valuable & insightful as human comments there would be no need for the rule. The rule is in place because they usually aren't.

            Relevant xkcd: https://xkcd.com/810/

      • redox99 5 hours ago

        It's pretty trivial to finetune an LLM to output posts that are indistinguishable.

      • majormajor 6 hours ago

        That's assuming a certain outcome: indistinguishable posts.

        Some would say LLM-generated posts will eventually be superior information-wise. In which case possibly the behavior will change naturally.

        Or maybe they don't get there any time soon and stay in the uncanny valley for a long time.

        I'm kinda fine with a "if you can't be bothered to even change the standard-corporate-BS-tone of your copypaste, you get downvoted" - for all I know some people might be more clever with their prompting to get something less crap-sounding, and then they'll just live or die on the coherence of the comment.

    • fenomas 7 hours ago

      Sure, and I think the reason is that whatever else they are, LLM outputs are disposable. Posting them here is like posting outputs from Math.random() - anyone who wants such outputs can easily generate their own.

    • Der_Einzige 7 hours ago

      Bold of you to assume that you will have any idea at all that an LLM generated a particular comment.

      If I use a trick like those recommended by the authors of min_p (high temperature + min_p)[1], I do a great job of escaping the "slop" phrasing that is normally detectable and indicative of an LLM. Even more so if I use the anti-slop sampler[2].

      LLMs are already more creative than humans are today; they're already better than humans at most kinds of writing, and they are coming to a comment section near you.

      Good luck proving I didn't use an LLM to generate this comment. What if I did? I claim that I might as well have. Maybe I did? :)

      [1] https://openreview.net/forum?id=FBkpCyujtS

      [2] https://github.com/sam-paech/antislop-sampler, https://github.com/sam-paech/antislop-sampler/blob/main/slop...

      • bjourne 6 hours ago

        Fascinating that very minor variations on established sampling techniques still generate papers. :) Afaik, neither top-p nor top-k sampling has conclusively been proven superior to good old-fashioned temperature sampling. Certainly, recent sampling techniques can make the text "sound different", but not necessarily read better. I.e., you're replacing one kind of bot generated "slop" with another.
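For readers unfamiliar with the samplers being debated above, the mechanics are small. Here is a rough sketch of min-p sampling (truncate to tokens whose probability is at least min_p times the top token's probability, then sample); the temperature-before-filtering order follows the common Hugging Face convention, and the function name and default values are illustrative, not from the paper:

```python
import numpy as np

def sample_min_p(logits, temperature=1.5, min_p=0.1, rng=None):
    """Toy min-p sampler: temperature-scale the logits, drop tokens whose
    probability falls below min_p * p_max, then sample from the survivors."""
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Dynamic truncation: the threshold scales with the top token's probability,
    # so a confident distribution keeps few tokens and a flat one keeps many.
    keep = probs >= min_p * probs.max()
    probs = np.where(keep, probs, 0.0)
    probs /= probs.sum()
    rng = rng or np.random.default_rng(0)
    return int(rng.choice(len(probs), p=probs))
```

The claimed appeal is that the threshold adapts per step, which is what lets people pair it with temperatures high enough to escape stock phrasing without degenerating into nonsense.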

  • benatkin 9 hours ago

    Nope.

    > LLMs are allowed on Libera.Chat. They may both take input from Libera.Chat and output responses to Libera.Chat.

    This wouldn't help HN.

    Nor would the opposite policy, if only because it would encourage accusatory behavior.

    • aspenmayer 9 hours ago

      I have asked dang to comment on this issue specifically in the context of this post/thread.

      The “opposite policy” is sort of the current status quo, per dang:

      https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

      See this thread for my own reasoning on the issue (as well as dang’s), as it was raised recently:

      https://news.ycombinator.com/item?id=41937993

      You’ll need showdead enabled on your profile to see the whole thread, which speaks to the controversial nature of this issue on HN.

      I agree that your mention of “encouraging accusatory behavior” is a point well-taken, and in the absence of evidence, such accusations themselves would likely run afoul of the Guidelines. But it’s worth noting that dang has said that LLM output itself is generally against the Guidelines, which could lead to a feedback loop: disinterested parties posting LLM content, only to be confronted by interested parties posting uninteresting takedowns of said LLM content and its posters.

      No easy answers here, I’m afraid.

      • benatkin 9 hours ago

        From the thread linked under "see this thread":

        > There are lot of grey areas; for example, your GP comment wasn't just generated—it came with an annotation that you're a lawyer and thought it was sound. That's better than a completely pasted comment. But it was probably still on the wrong side of the line. We want comments that the commenters actually write, as part of curious human conversation.

        This doesn't leave much room for AI non-slop:

        > We want comments that the commenters actually write, as part of curious human conversation.

        I think HN is trying to be good at being HN, not just to provide the most utility to its users in general. So those wanting something like HN as it might be if it started in 2030 may want to try to build a new site.

      • refulgentis 9 hours ago

        Law is hard!

        In general, the de facto status quo is:

        1. For whatever reason*, large swaths of copy-pasted LLM output are easily detectable.

        2. If you're restrained, polite, with an accurate signal on this, you can indicate you see this, and won't get downvoted heavily. (ex. I'll post "my internal GPT detector went off, [1-2 sentence clipped version of why I think its wrong even if it wasn't GPT]")

        3. People tend to downvote said content, as an ersatz vote.

        In general, I don't think there needs to be a blanket ban against it, in the sense of I have absolutely no problem with LLM output per se, just lazy invocation of it, ex. large entry-level arguments that were copy-pasted.

        i.e. I've used an LLM to sharpen my already-written, rushed, poor example, which didn't result in low-perplexity, standard-essay-formatted content.

        Additionally, IMHO it's not bad, per se, if someone invests in replying to an LLM. The fact that they are replying indicates it's an argument worth furthering with their own contribution.

        * a strong indicator that a fundamental goal other than perplexity minimization may increase perceived quality

        • og_kalu 7 hours ago

          The reason is not strange or unknown. The text-completion GPT-3 from 2020 often sounds more natural than GPT-4. The reason is the post-training process: models are more or less being trained to sound like that during RLHF. Stilted, robotic, like a good little assistant. OpenAI and Anthropic have said as much. It's not a limitation of the loss function or even the state of the art.

        • aspenmayer 7 hours ago

          To me, the essence of online discussion boards is a mutual exchange of ideas, thoughts, and opinions via a shared context, all in service of a common goal of a meeting of minds. When one party uses LLMs, it undermines the unspoken agreement to post “authentic” content as opposed to “inauthentic” content. Authenticity in this context is not just a “nice to have,” but is part and parcel of the entire enterprise of participating in a shared experience and understanding via knowledge transfer and cross-cultural exchange.

          I can see that you care enough to comment here in a “genuine” and good faith manner, as I recognize your username and your posting output as being in good faith. That being said, an increase in LLM-generated content on HN generally is likely to result in an associated increase in the number of bad actors using LLMs to advance their own ends. I don’t want to give bad actors any quarter, whether that be wiggle room, excuses about Guidelines or on-topic-ness, or any other justification for why self-proclaimed “good” actors think that using LLMs is okay when they do it but not when bad actors do it, because doing so gives bad actors cover to do the same, as long as they don’t get caught.

          • refulgentis 7 hours ago

            > That being said, an increase in LLM-generated content on HN generally is likely to result in an associated increase in the number of bad actors using LLMs to advance their own ends.

            This hit me like a ton of bricks, very true.

            The older I get the more I understand the optimist in me rushes to volunteer good things that'll happen over the obvious bad.

            This, in retrospect, will apply here too and is explanatory for some notably bad vibes I've had here the past year or two. (been here 15 years)

        • vunderba 6 hours ago

          > Additionally, IMHO it's not bad, per se, if someone invests in replying to an LLM. The fact they are replying indicates its an argument worth furthering with their own contribution

          And once those floodgates are open, what exactly makes you think that they're not just also using an LLM to generate their "contribution"?

          • refulgentis 4 hours ago

            Not necessarily bad either! That's what the downvote button is for :)

    • t-writescode 9 hours ago

      The odds of LLMs being used to produce content on HN are a number approaching 100%.

      The odds of LLMs being trained on, or queried against, data scraped from HN or HNSearch are even closer to 100%.

      I know you don't like the "LLMs are allowed..." part, but they're here and they literally cannot be gotten rid of. However, this rule,

      > As soon as possible, people should be made aware if they are interacting with, or their activity is being seen by, a LLM. Consider using line prefixes, channel topics, or channel entry messages.

      Could be something that is strongly encouraged and helpful, and possibly the "good" LLM users would follow it.

fjdjshsh 5 hours ago

I strongly believe it should be illegal to post something automatically by an LLM without clearly identifying it as such. I hope countries start passing these laws soon

  • conception 5 hours ago

    Why pass a law that’s completely unenforceable?

    • theamk 5 hours ago

      It is somewhat enforceable.

      Sure, no one is going to go after a random reddit post, but if a Major Newspaper wants to have AI write their articles, this would have to be labeled. And if your bank gets an LLM support agent, it can no longer pretend to be human. All very desirable outcomes IMHO.

    • karlgkk 4 hours ago

      It's not unenforceable at all. Major players would be forced to abide by it, smaller players would reduce their use of LLMs, and not marking LLM content would be a bannable offense on most platforms.

      • conception 4 hours ago

        Open source Local LLMs are already a thing. That pandora’s box is way way open already.

        • Jensson an hour ago

          If a big company does that they wouldn't be able to hide it, this is just as easy to enforce as any other regulation.

        • Grimblewald an hour ago

          Black-market guns are also a thing, and they're relatively easy to manufacture using untracked machines and materials, with skills you can develop in under a year. That doesn't mean that regulating the sale / supply / ownership of guns isn't useful.

bawolff 5 hours ago

As far as I can tell, this policy is essentially: don't do anything with an LLM that would get you banned if you did it manually as a human.

superkuh 10 hours ago

Mostly it's just a formalizing of the established status quo. But the changes re: allowing training on chat logs have caused some unintended consequences.

For one, now the classic IRC megahal bots which have been around for decades are technically not allowed unless you get permission from Libera staff (and the channel ops). They are markov chains that continuously train on channel contents as they operate.
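For anyone who hasn't seen one of these bots, the mechanism being swept up by the new wording is tiny. A toy sketch of a continuously trained chain (order-1 and word-level for brevity; real megahal bots are fancier, and the class and method names here are made up):

```python
import random
from collections import defaultdict

class TinyMarkovBot:
    """Toy order-1 word-level Markov chain that, like the classic IRC
    megahal bots, keeps (re)training on every channel line it sees."""

    def __init__(self, seed=0):
        # Maps each word to the list of words observed to follow it.
        self.table = defaultdict(list)
        self.rng = random.Random(seed)

    def train(self, line):
        # "Training" is just appending observed word transitions.
        words = line.split()
        for a, b in zip(words, words[1:]):
            self.table[a].append(b)

    def reply(self, start, max_words=10):
        # Walk the chain from a seed word, picking random successors.
        out = [start]
        for _ in range(max_words - 1):
            successors = self.table.get(out[-1])
            if not successors:
                break
            out.append(self.rng.choice(successors))
        return " ".join(out)
```

The point of the sketch: the "model" is a transition table that updates on every message, which is why a policy worded around "training on chat logs" technically catches it even though nothing LLM-like is involved.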

But hopefully, as in the past, the Libera staffers will intelligently enforce the spirit of the rules and avoid any silly situations like the above caused by imprecise language.

  • comex 9 hours ago

    By its wording, the policy is specifically about training LLMs. A classic Markov chain may be a language model, but it’s not a large language model. The same rules might not apply.

    • superkuh 9 hours ago

      Yeah, you'd think, but this one was run by the staff in #libera the other night after the announcement and it sounded like they believed markovs technically counted. But I imagine as long as no one is rocking the boat they'll be left alone. Perhaps there was some misunderstanding on my part.

  • martin-t 7 hours ago

    A classic example of a community self-regulating until overwhelmed, at which point rules are imposed that ban previously accepted and harmless behavior.

    Rules must take scale into account and do it explicitly to avoid selective enforcement.

    There's a difference between one person writing a simple bot and a large corporation offering a bot pretending to be human to everyone. The first is harmless and fun, the second is a large scale for-profit behavior with proportionally large negative externalities.

ranger_danger 5 hours ago

Now can libera please establish etiquette for channel mods? All the biggest channels have extremely toxic, egotistical mods with god complexes visible from space.

  • aspenmayer an hour ago

    Have you seen examples of such codes of conduct in the IRC context before? Closest thing I can think of maybe is SDF’s or other shared systems’, but such rules seem somewhat quaint compared to norms on IRC.

    Speaking of SDF, here’s their bot policy:

    https://sdf.org/?faq?CHAT?01

    > [01] CAN I RUN AN IRC BOT HERE??

    > IRC BOTs are pretty intensive and most systems and networks ban them.

    > In an experiment conducted in 1996 on this system, we allowed users to compile and run their bots. The result was hundreds of megs of disk space became occupied because each user insisted on having their own version of eggdrop uncompressed and untarred in their home directory. All physical memory was in use as ~45 eggdrop processes were running concurrently. The system was basically USELESS and it took 1.5 hours to login if you were patient enough (even from the system console).

    > The ARPA members called a vote on the issue and the result was almost a resounding unanimous NO.

    > However, there are times when running a bot is useful, for instance keeping a channel open, providing information or just logging a channel. Basically the bot policy here is a bit relaxed for MetaARPA members. Common sense is the rule. As long as you aren't running a harmful process, such as a hijack bot, warez bot or connecting to a server that does not allow bots, then you may run a bot process.

    More info about SDF for those who are curious:

    https://en.wikipedia.org/wiki/SDF_Public_Access_Unix_System