Lerc 3 hours ago

Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack.

A lot of things suck right now. Social media definitely gives us the ability to see that. Using your personal ideology to link correlations is not the same thing as finding causation.

There will undoubtedly be some damaging aspects of social media, simply because it is large and complex. It would be highly unlikely that all those factors always aligned in the direction of good.

All too often a collection of cherry-picked studies is presented in books targeting the worried public. It can build a public opinion that is at odds with the data. Some people write books just to express their ideas. Others, like Jonathan Haidt, seem to think that putting their efforts into convincing as many people as possible of their ideology is preferable to putting effort into demonstrating that their ideas are true. There is this growing notion that perception is reality: convince enough people and it is true.

I am prepared to accept that aspects of social media are bad. Clearly identify why and how, and perhaps we can make progress addressing each thing. Declaring it's all bad acts as a deterrent to removing faults. I become very sceptical when many disparate threads of the same thing seem to coincidentally turn out to be bad. That suggests either there is an underlying reason that has been left unstated and unproven, or the information I have been presented with is selective.

  • Llamamoe 3 hours ago

    I feel like regardless of all else, the fact of algorithmic curation is going to be bad, especially when it's contaminated by corporate and/or political interests.

    We have evolved to parse information as if its prevalence is controlled by how much people talk about it, how acceptable opinions are to voice, how others react to them. Algorithmic social media intrinsically destroy that. They change how information spreads, but not how we parse its spread.

    It's parasocial at best, and very possibly far worse at worst.

    • armchairhacker 42 minutes ago

      No doubt the specific algorithms used by social media companies are bad. But what is "non-algorithmic" curation?

      Chronological order: promotes spam, which will be mostly paid actors. Manual curation by "high-quality, trusted" curators: who are they, and how will they find content? Curation by friends and locals: this is probably an improvement over what we have now, but it's still dominated by friends and locals who are more outspoken and charismatic; moreover, it's hard to maintain, because curious people will try going outside their community, especially those who are outcasts.

      EDIT: Also, studies have shown people focus more on negative (https://en.wikipedia.org/wiki/Negativity_bias) and sensational (https://en.wikipedia.org/wiki/Salience_(neuroscience)#Salien...) things (and thus post/upvote/view them more), so an algorithm that doesn't explicitly push negativity and sensationalism may appear to.

      • rightbyte 34 minutes ago

        > Chronological order: promotes spam, which will be mostly paid actors.

        If users choose who to follow, this is hardly a problem. Also classical forums dealt with spam just fine.

        • armchairhacker 25 minutes ago

          How will users choose who to follow? This was a real problem when I tried Mastodon/Lemmy/Bluesky: I saw lots of chronological posts, but none of them were interesting.

          Unfortunately, classical forums may have dealt with spam better because there were fewer people online back then. Classical forums that exist today have mitigations and/or are overrun with spam.

        • squigz 16 minutes ago

          > Also classical forums dealt with spam just fine.

          Err... well, no, it was always a big problem, still is, and is made even more so by the technology of our day.

    • Lerc 2 hours ago

      I have wondered if it's not algorithmic curation per se that is the problem, but personalised algorithmic curation.

      When each person is receiving a personalised feed, there is a significant loss of common experience. You are not seeing what others are seeing and that creates a loss of a basis of communication.

      I have considered the possibility that the solution might be to enable many areas of curation, but in each domain the thing people see is the same for everyone. In essence, subreddits. The problem then becomes the nature of the curators; subreddits show that human curators are also not ideal. Is there an opportunity for public algorithmic curation? You subscribe to the algorithm itself and see the same thing as everyone else who subscribes. The curation is neutral (but will be subject to gaming; the fight against bad actors will be perpetual in all areas).

      I agree about the tendency for the prevalence of conversation to influence individuals, but I think it can be resisted. I don't think humans live their lives controlled by their base instincts; most learn to find a better way. It is part of why I do not like the idea of de-platforming. I found it quite instructive when Jon Stewart did an in-depth piece on trans issues. It made an extremely good argument, but it infuriated me to see, a few days later, so many people talking about how great it was because Jon agreed with them and he reaches so many people. They completely missed the point. The reason it was good is that it made a good case. This cynical "it's good if it reaches the conclusion we want and reaches lots of people" is what is destroying us. Once you feel like it is not necessary to make your case, but just to shout the loudest, you lose the ability to win over people who disagree, because they don't like you shouting and you haven't made your case.

  • majormajor 3 hours ago

    It's increasingly discussed in traditional media too so let's toss out that first line glib dismissal.

    More and more people declaring it's net-negative is the first step towards changing anything. Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum.

    (Or we could argue that "social media" in the Facebook-era sense is just one part of a larger entity, "the internet," that we're singling out.)

    • Lerc an hour ago

      I did not consider it a glib dismissal, and I would not consider traditional media an appropriate avenue to litigate this either. "Trial by media" is a term used to describe something that we generally think shouldn't occur.

      The appropriate place to find out what is and isn't true is research. Do research, write papers, discuss results, resolve contradictions in findings, reach consensus.

      The media should not be deciding what is true, they should be reporting what they see. Importantly they should make clear that the existence of a thing is not the same thing as the prevalence of a thing.

      >Academic "let's evaluate each individual point about it on its own merits" is not how this sort of thing finds political momentum.

      I think much of my post was in effect saying that a good deal of the problem is the belief that building political momentum is more important than accuracy.

    • delusional 2 hours ago

      > More and more people declaring it's net-negative is the first step towards changing anything.

      I accept that "net-negative" is a cultural shorthand, but I really wish we could go beyond it. I don't think people are suddenly looking at both sides of the equation and evaluating rationally that their social media interactions are net negative.

      I think what's happening is a change in the novelty of social media. That is, the net value is changing. Originally, social media was fun and novel, but once that novelty wears away it's flat and lifeless. It's sort of abstractly interesting to discuss tech with likeminded people on HN, but once we get past the novelty, I don't know any of you. Behind the screen-names is a sea of un-identifiable faces that I have to assume are like-minded to have any interesting discussions with, but which are most certainly not like me at all. It's endless discussions with people who don't care.

      I think that's what you're seeing. A society caught up in the novelty, losing that naive enjoyment. Not a realization of net effects.

    • logicchains 2 hours ago

      >It's increasingly discussed in traditional media too so let's toss out that first line glib dismissal.

      Traditional media is the absolute worst possible source for anything related to social media because of the extreme conflict of interest. Decentralised media is a fundamental threat to the business model of centralised media, so of course most of the coverage of social media in traditional media will be negative.

      • alisonatwork 2 hours ago

        Unfortunately most of what people understand as "social media" is not decentralized, and most of the biggest names on Substack in particular come directly out of "traditional media", which is exactly why it's not a real alternative. Substack is just another newspaper except now readers have to pay for every section they want to read.

        • bluebarbet an hour ago

          The difference between traditional and social media is not just technical. Traditional media hosts a profession (journalism) with a code of ethics, editorial oversight, minimal standards, a mission of truth-seeking. It's easy to be cynical but those things have generally served us well. The Substack jungle is not a good replacement.

          • ivewonyoung 38 minutes ago

            > Traditional media hosts a profession (journalism) with a code of ethics, editorial oversight, minimal standards, a mission of truth-seeking

            Which traditional media outlets follow those things nowadays? Genuine question, looking for information and news to consume.

      • Theodores 2 hours ago

        I wish to quibble with you on this as there is a love/hate relationship between the conventional media and social media.

        The mainstream media have several sources, including the press releases that get sent to them, the newswires they get their main news from and social media.

        In the UK the press, and in particular the BBC, were early adopters of Twitter. Most of the population would not have heard of it had it not been for the journalists at the BBC. The journalists thought it was the best thing since the invention of the printing press. Latterly Instagram has become an equally useful source to them and, since Twitter became X, there is less copying and pasting of tweets.

        The current U.S. President seems capable of dictatorship via social media, so following his messages on social media is what the press do. I doubt any journalist has been on whitehouse.gov for a long time; the regular web and regular sources have been demoted.

    • krapp 3 hours ago

      "net-negative" sounds like a rigidly defined mathematically derived result but it's basically just a vibe that means "I hate social media more than I like it."

      • sedawkgrep 2 hours ago

        I'm struggling to understand your point, especially since the conclusion you posit is rather glib and dismissive.

        Net-negative is not quantifiable. But it is definitely qualifiable.

        I don't think you have to think of things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievably broad and deep effect powerful entities can have on public information/opinion through SM.

        I think we can agree all of these are bad, and a net-negative, without needing any mathematical rigor.

        • krapp 2 hours ago

          My point is that "More and more people declaring social media net-negative" doesn't mean anything, and it certainly isn't a valid "first step towards changing anything" because it isn't actionable.

          >I don't think you have to think of things in terms of "hate it more than I like it" when you have actual examples on social media of children posting self-harm and suicide, hooliganism and outright crimes posted for viewership, blatant misinformation proliferation, and the unbelievably broad and deep effect powerful entities can have on public information/opinion through SM.

          Sure, and then there's plenty of children not posting self-harm and suicide, hooliganism and outright crimes posted for viewership, and plenty of information and perfectly normal, non-harmful communication and interaction. "net-negative" implies there is far more harmful content than non-harmful, and that most people using social media are using it in a negative way, which seems more like a bias than anything proven. I can agree that there are harmful and negative aspects of social media without agreeing that the majority of social media content and usage is harmful and negative.

  • nathan_compton 41 minutes ago

    All this is good except that to achieve any kind of actual political action in this actual universe in which we live, we must use rhetoric. Asking people to be purely rational is asking them to fail to change anything about the way our culture works.

  • solid_fuel 2 hours ago

    There's a lot of money in social media, literally hundreds of billions of dollars. I expect the case against it will continue to grow, like the case against cigarettes did.

    I will say this, and this is anecdotal, but other events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media, and how much social media does to amp up the anger and tone of people. When I open Twitter, or Facebook, or Instagram, or any of the smaller networks I see people baying for blood. Quite literally. But when I talk to my friends, or look at how people are acting in the street, I don't see that. I don't see the absolute frenzy that I see online.

    If social media turns up the anger that much, I don't think it's worth the cost.

    • Lerc 2 hours ago

      >There's a lot of money in social media, literally hundreds of billions of dollars. I expect the case against it will continue to grow, like the case against cigarettes did.

      I don't think it follows that something making money must do so by being harmful. I do think strong regulation should exist to prevent businesses from introducing harmful behaviours to maximise profits, but to justify that opinion I have to believe that there is an ability to be profitable and ethical simultaneously.

      >events this week have been an excellent case study in how fast misinformation (charitably) and lies (uncharitably) spread across social media

      On the other hand, the WSJ, the Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media. It's true that there was incorrect information and misinformation on social media, but it was also immediately challenged. That does create a source of conflict, but I don't think the solution is to accept falsehoods unchallenged.

      If anything education is required to teach people to discuss opposing views without rising to anger or personal attacks.

      • solid_fuel 2 hours ago

        > I don't think it follows that something making money must do so by being harmful.

        My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms.

        > On the other hand The WSJ, Guardian, and other media outlets have published incorrect information on the same events. The primary method that people had to discover that this information was incorrect was social media.

        I agree with your point here too, and I don't think the solution is to completely stop or get rid of social media. But the problem I see is that there are tons of corners of social media where you can still see the original lies being repeated as if they are fact. In some spaces they get challenged, but in others they are echoed and repeated uncritically. That is what concerns me: long-debunked rumors and lies that get repeated because they feel good.

        > If anything education is required to teach people to discuss opposing views without rising to anger or personal attacks.

        I think many people are actually capable of discussing opposing views without it becoming so inflammatory... in person. But algorithmic amplification online works against that and the strongest, loudest, quickest view tends to win in the attention landscape.

        My concern is that social media is lowering people's ability to discuss things calmly, because instead of a discussion amongst acquaintances, everything is an argument against strangers. And that creates a dynamic where people who come to argue are not arguing against just you, but against every position they think you hold. We presort our opponents into categories based on perceived allegiance and then attack the entire image, instead of debating the actual person.

        But I don't know if that can be fixed behaviorally, because the challenge of social media is that the crowd is effectively infinite. The same arguments get repeated thousands of times, and there's not even a guarantee that the person you are arguing against is a real person and not just a paid employee, or a bot. That frustration builds into a froth because the debate never moves; it just repeats.

        • Lerc an hour ago

          >My point isn't that it's automatically harmful, simply that there is a very strong incentive to protect the revenue. That makes it daunting to study these harms.

          The problem is that having an incentive to hide harms is being used as evidence for the harm, whether it exists or not.

          Surely the same argument could be applied that companies would be incentivised to make a product that was non-harmful over one that was harmful. Harming your users seems counterproductive at least to some extent. I don't think it is a given that a harmful approach is the most profitable.

          • solid_fuel an hour ago

            > The problem is that having an incentive to hide harms is being used as evidence for the harm, whether it exists or not.

            No, the incentive to hide harm is being given as a reason that studies into harm would be suppressed, not as evidence of harm in and of itself. This is a direct response to your original remark that "Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack."

            Potential mechanisms and dynamics that cause harm are in the rest of my comment.

            > Harming your users seems counterproductive at least to some extent.

            Short term gains always take precedence. Cigarette companies knew about the harm of cigarettes and hid it for literally decades. [0] Fossil fuel companies have known about the danger of climate change for 100 years and hidden it. [1]

            If you dig through history there are hundreds of examples of companies knowingly harming their users, and continuing to do so until they were forced to stop or went out of business. Look at the Sacklers and the opioid epidemic [2], hell, look at Radithor. [3] It is profitable to harm your users, as long as you get their money before they die.

            [0] https://academic.oup.com/ntr/article-abstract/14/1/79/104820... [1] https://news.harvard.edu/gazette/story/2021/09/oil-companies... [2] https://en.wikipedia.org/wiki/Sackler_family [3] https://en.wikipedia.org/wiki/Radithor

            • Lerc 43 minutes ago

              >No, the incentive to hide harm is being given as a reason that studies into harm would be suppressed, not as evidence of harm in and of itself. This is a direct response to your original remark that "Part of me thinks that if the case against social media was stronger, it would not be being litigated on substack."

              That seems like a fair argument. I don't think it means that it grants opinions the weight of truth. I think it would make it fair to identify and criticise suppression of research and advocate for a mechanism by which such research can be conducted. An approach that I would support in this area would be a tax or levy on companies with large numbers of users, earmarked for funding independent research regarding the welfare of their user base and the effects on society as a whole.

              >Short term gains always take precedence.

              That seems a far worthier problem to address.

              >If you dig through history there are hundreds of examples of companies knowingly harming their users

              I don't deny that these things exist, I simply believe that it is not inevitable.

  • logicchains 2 hours ago

    There's a concerted assault on social media from the powers that be because social media is essentially decentralised media, much harder for authoritarians to shape and control than centralised media. Social media is why the masses have finally risen up in opposition to what Israel's been doing in Gaza, even though the genocide has been going on for over half a century: decentralised information transmission allowed people to see the reality of what's really going on there.

blitz_skull 2 hours ago

The last week has taken me from “I believe in the freedom of online anonymity” to “Online anonymity possesses a weight that a moral, civil society cannot bear.”

I do not believe humans are capable of responsibly wielding the power to anonymously connect with millions of people without the real weight of social consequence.

  • jacobedawson 2 hours ago

    The strongest counterpoint to that is the intense chilling effect that zero anonymity would have on political dissent and discourse that doesn't match the status quo or party line. I feel that would be much more dangerous for our society than occasionally suffering the consequence of some radicalized edge cases.

    • slg 2 hours ago

      In that instance, the anonymity is treating the symptom and not the root cause of the problem you fear. The actual problem is a society that does not tolerate dissent.

      • NoahZuniga 2 hours ago

        You might live in an extremely free country and have no fear of political persecution but still fear social persecution.

        If someone I was friends with made racist remarks, they wouldn't be prosecuted for that. But I would stop being their friend. Similarly if I was the only one in my friend group against racism and advocated fiercely against it, they would probably stop being my friends.

        • slg an hour ago

          >If someone I was friends with made racist remarks, they wouldn't be prosecuted for that. But I would stop being their friend.

          So you want your friend to be able to anonymously express their racism while being able to hide it from you? I can't imagine advocating for that as a desired goal rather than a negative side effect.

          >Similarly if I was the only one in my friend group against racism and advocated fiercely against it, they would probably stop being my friends.

          If we are talking about a society level problem, I think it is a little silly to think a society as toxic as this hypothetical one could be saved by anonymous internet posting.

          For the record, I'm not as against anonymous posting as the person who started this specific comment thread, I just think this line of argument is advocating for a band-aid over bigger issues.

          • NoahZuniga 21 minutes ago

            These were just extreme examples to indicate that there can be social repercussions to dissenting.

            Maybe a more convincing example is that if I advocate for making it easier to build housing because that will lower the cost of housing and many of my friends are homeowners, they might really not like me because lowering the cost of housing directly lowers their net worth.

            Are these people evil for not wanting to lose their retirement savings (wrapped up in their home)?

            Edit: also

            > So you want your friend to be able to anonymously express their racism while being able to hide it from you?

            While on the specific example of racism I'm pretty convinced of my moral correctness, I am not bold enough to declare that every bit of my worldview is the universally correct one. I am also not so bold as to say that I will always be instantly convinced of my incorrectness by a friend challenging my worldview (if they actually do have a better stance on something). My conclusion is that my friend should have some place to platform his better opinion without (having to fear) alienating me. And the only way to achieve this, as far as I know, is anonymous platforms.

          • foxglacier 42 minutes ago

            I live in a society as toxic as that. It's New Zealand. One of the minor parties currently in government aims to undo systemic racism. However, the popular opinion is that they are the racists because of that. I don't dare tell people that I voted for them because I'll be judged as a racist by some of my family members and loose friends. If I say it on the local internet groups, others will be hostile to me for it. Anonymity helps people to speak up about these issues.

            How do we solve those bigger issues when we live in an emperor's new clothes society? Wait for children who haven't learnt the rules to point them out?

      • Spivak 2 hours ago

        I think we should operate on the premise that no society in the history of humanity has tolerated dissent and none ever will. So treating the symptom is all we can do. It's the basis of why privacy is necessary in any respect.

        The rational, tolerant society you imagine is so far-fetched we don't even pretend it can exist, even in fantasies.

    • phendrenad2 an hour ago

      Well, perhaps people should think twice before stirring the pot. Maybe the incentive to get your 20 seconds of fame by making some snappy comment on a public figure's post is part of what's driving incivility online.

      • nathan_compton 39 minutes ago

        I actually don't think incivility per se is the problem. The problem is that social media encourages us to be inauthentic because we all subconsciously cater to the gaze, both courting its attention and terrified of it at the same time. This is way worse than people being rude.

    • Barrin92 8 minutes ago

      >the intense chilling effect that zero anonymity would have on political dissent

      Chilling the discourse would be a feature, not a bug. In fact what discourse in most places these days needs is a reduction in temperature.

      This kind of defence of anonymity is grounded in the anthropologically questionable assumption that when you are anonymous you are "who you really are" and when you face consequences for what you say you aren't. But the reality is, we're socialized beings and anonymity tends to turn people into mini-sociopaths. I have many times, in particular when I was younger, said things online behind anonymity that were stupid, incorrect, more callous, and more immoral than I would ever have said face-to-face.

      And that's not because that's what I really believed in any meaningful sense, it's because you often destroy any natural inhibition to behave like a well-adjusted human through anonymity and a screen. In fact even just the screen is enough when you look at what people post with their name attached, only to be fired the next day.

  • Longlius 2 hours ago

    Anonymity has no real impact on this. People post heinous things under their full legal names just as readily.

    I'd argue if all it took was people saying some mean things anonymously to change your opinion, then your convictions weren't very strong to begin with.

    • ks2048 2 hours ago

      > People post heinous things under their full legal names just as readily.

      I disagree with "just as readily" (i.e. most of the most heinous things are indeed bots or trolls).

      Also, I imagine that without the huge amount of bots and anonymous trolls, the real-name-accounts would not post as they do now - both because their opinions are shaped by the bots AND because the bots give them the sense that many more people agree with them.

    • numpad0 an hour ago

      IMO it's a bit of mental gymnastics to think that anonymity has to do with this, when extremist narratives always come attached with a memorable full name and a face.

    • add-sub-mul-div an hour ago

      You're right. It's the weakest who are the most susceptible to demagoguery.

  • rkomorn 2 hours ago

    They're unfortunately not much more capable of responsibly connecting with people non-anonymously, I'd say.

    See examples like finding someone's employer on LinkedIn to "out" the employee's objectionable behavior, doxxing, or to the extreme, SWATing, etc.

    • qarl 2 hours ago

      Yeah. People use their real identities on Facebook, and it doesn't help a bit.

      • ks2048 an hour ago

        > it doesn't help a bit.

        I would replace "it doesn't help a bit" with "it doesn't solve the problem". My casual browsing experience is that X is much more intense / extreme than Facebook.

        Of course, the bigger problem is the algorithm: if the extreme is always pushed to the top, then it doesn't matter if it's 1% or 0.001%. With a big enough pool, you only see extremes.
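
        To make that concrete, here's a rough toy simulation (hypothetical numbers and a made-up engagement model, not any platform's actual ranking code): rank a large pool of posts by engagement and even a vanishingly small "extreme" fraction ends up filling the top of the feed.

          import random

          POOL_SIZE = 1_000_000
          FEED_LENGTH = 10

          def extreme_posts_at_top(extreme_share):
              # Build a pool where a tiny fraction of posts are "extreme" and,
              # by assumption, draw far more engagement than ordinary posts.
              posts = []
              for _ in range(POOL_SIZE):
                  extreme = random.random() < extreme_share
                  engagement = random.gauss(100, 15) * (50 if extreme else 1)
                  posts.append((engagement, extreme))
              # An engagement-ranked feed: sort the whole pool, keep the top N.
              feed = sorted(posts, reverse=True)[:FEED_LENGTH]
              return sum(extreme for _, extreme in feed)

          for share in (0.01, 0.00001):  # 1% vs 0.001% of the pool
              print(f"{share:.3%} extreme in pool -> "
                    f"{extreme_posts_at_top(share)}/{FEED_LENGTH} extreme at the top")

        The base rate in the pool barely changes what the top of the feed looks like once ranking is purely by engagement.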

  • boplicity 37 minutes ago

    Plenty of people are perfectly willing to be publicly despicable online in their social media accounts, using their real names. Pretty easy to find them.

    The problem is the leaders of the large social media organizations do not care about the consequences of their platforms enough to change how they operate. They're fine with hosting extremist and offensive content, and allowing extremists to build large followings using their platforms. Heck, they even encourage it!

  • cramsession 2 hours ago

    Why is that? Some irony as well that you're posting anonymously. Are you comfortable giving us your identification right now?

  • XorNot 2 hours ago

    What a bizarre conclusion given the multiple high profile individuals and politicians who overtly and directly called for violent oppression and civil war against their political enemies in the last week.

  • analognoise 2 hours ago

    We don’t have a moral or civil society anyway; we can’t even prosecute Trump’s numerous illegal actions (even when convicted!). Can’t get the Epstein files. Can’t even point out Charlie Kirk was not a great person (while politicians said nothing about the school shooting the same day), and we live where it's legal to kill 40,000 of us a year due to poor medical coverage so we can prop up the stock.

    I’m not sure, given the moral dystopia we currently inhabit, what positive benefit would accrue from removing online anonymity.

isodev 4 hours ago

I think, to be clear, that’s “The case against algorithmic* social media”, the kind that uses engagement as a core driver.

stack_framer 26 minutes ago

I did my own informal research study—I quit social media cold turkey. My findings: I feel much better. I don't need any other data.

mallowdram 38 minutes ago

The missing link to our epistemic collapse is language. The acceleration of language, which is arbitrary, accelerates language distortion. The contagion on social media is merely a symptom of the disease of language.

“Historical language records reveal a surge of cognitive distortions in recent decades” https://www.pnas.org/doi/10.1073/pnas.2102061118

xnx 2 hours ago

Social media would be entirely different if there were no monetization on political content. There's a whole lot of ragebaiting/engagement-farming for views. I don't know how to filter for political content, but it's worth a shot. People are free to say whatever they want, but they don't need to get paid for it.

  • stevage 2 hours ago

    Strangely I never see political content on YouTube. Maybe the algorithm worked out quickly I'm simply not interested. Whereas twitter/mastodon/bluesky are awash in it, to the point of making those platforms pretty unusable for me.

    I guess the difference is that YouTube content creators don't casually drop politics in because it will alienate half their audience and lose revenue. Whereas on those other platforms the people I follow aren't doing it professionally and just share whatever they feel like sharing.

    • timeon 43 minutes ago

      Interesting, I do not see politics on Mastodon, while YouTube recommends me not just random politics, but conspiracy theories about politics.

      On Mastodon, those I follow do not post about politics, and if they do it is hidden behind a content warning.

      YouTube is probably location-based, as I have no account there and that type of content is relatively mainstream where I live.

  • ants_everywhere an hour ago

    They get paid in political power; that's why it's so ragebait-driven.

1970-01-01 31 minutes ago

You reap what you sow. Stupid and uninformed voices receiving equivalent status to wise scientific experts was a mistake. Witnessing the flat Earth crowd growing over the decades encapsulates everything wrong with social media.

_wire_ 5 hours ago

These question-begging, click-bait something-is-something-other-than-you-think posts are something less entertaining than the poster thinks.

  • abnercoimbre 4 hours ago

    Yup. Soon as I read:

    > I am going to focus on the putative political impacts of social media

    I closed the tab.

    • IshKebab 3 hours ago

      Yeah I closed it when I saw the size of the scroll bar. If you need 100k words to make your point write a book.

      • stevage 2 hours ago

        Huh, I often have the reverse sentiment with a lot of books: this should have been a blog post. There's often a good intro which lays out the thesis, but each chapter is way too long, spelling out details that are obvious or superfluous.

  • greyadept an hour ago

    The author could have made the same points without using words like “polemicizing”, “putative”, and “epistemic”.

gerdesj 20 minutes ago

"In conclusion: " "...in particular in the U.S., but probably across Europe as well. ..."

The world is rather larger than the US and Europe. I physically endure myopia and frankly Mr Witkin seems to figuratively suffer from it.

I need only mention the name: TikTok.

homeonthemtn an hour ago

Social media is a cancer on our society. It is both the asbestos and cigarettes of our generation.

  • infotainment 9 minutes ago

    Agreed, and I feel like the right answer might be to treat it exactly like cigarettes. For example:

    1. Ban in most places except very specific ones. E.g., "would you like to sit in the social media use section today?"

    2. Make it extremely expensive to access and use. This would likely do wonders to cut down on use, just as it did for cigarettes.

alexfromapex 4 hours ago

My main case against it at this point is that everything you post will be accessible by "bad" AI.

cramsession an hour ago

Without social media, we'd be left with mainstream media, which is a very narrow set of channels that those in power can control. Despite rampant censorship on social media, it's still the best way to circumvent propaganda and give people a voice.

  • sethammons an hour ago

    > it's still the best way to circumvent propaganda and give people a voice.

    I think it can amplify propaganda but still give people a voice, which is better than no voice.

  • nicce an hour ago

    Without social media, people would go out and talk face-to-face or even arrange meetings, like before social media.

    • cramsession 19 minutes ago

      That's not media, it's communication with people you know.

  • n1b0m an hour ago

    It's still propaganda, just from Russian and Chinese bots.

    • cramsession 18 minutes ago

      The vast majority of bots are from Israel.

  • add-sub-mul-div an hour ago

    The idea of social media reducing net propaganda is a wild take.

    • cramsession 17 minutes ago

      We would have no idea what was going on in Gaza if it wasn't for social media. It really exposed how biased (which probably isn't even a strong enough word) our msm is.

jparishy 2 hours ago

We, consumers online, are sliced and diced on every single dimension possible in order to optimize our clicks for another penny.

As a side benefit, when you do this enough, the pendulum that goes over the middle line for any of these arbitrary-but-improves-clicks divisions builds momentum until it hits the extremes. On either side, it doesn't matter, because it will swing back just as hard, again and again.

As a side benefit, the back and forth of the pendulum is very distracting to the public, so we do not pay attention to who is pushing it. Billions of collective hours spent fighting with no progress except for the wallets of rich ppl.

It almost feels like a conspiracy, but I think it's just the direct, natural result of the vice-driven economy we have these days.

hbarka an hour ago

Full anonymity in social media should not be allowed. It becomes a cover for bad actors (propagandists, agents, disinformation, bots, age-inappropriate content, etc.). It doesn’t have to be a full identity, but knowing your user metadata is open during interactions can instill a sense of responsibility and consequence of social action. As in real life.

  • idle_zealot an hour ago

    Looking at any random fullrealname Facebook account will disabuse you of this notion. People will tie vile shit to their identities without a second thought.

    Rather than sacrifice the cover that anonymity grants vulnerable people, journalists, and activists, I think we should come at this issue by placing restrictions on how social media platforms direct people to information. The impulse to restrict and censor individuals rather than restrict powerful organizations profiting from algorithmic promotion of the content you deem harmful is deeply troubling.

    The first step here is simple: identify social media platforms over some size threshold, and require that any content promotion or algorithmic feed mechanism they use is dead-simple to understand and doesn't target individuals. That avoids the radicalization rabbithole problem. Make the system trivial and auditable. If they fail the audit then they're not allowed to have any recommendation system for a year. Just follows and a linear feed (sorting and filtering are allowed so long as they're exposed to the user).
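
    To illustrate how small "dead-simple" can be, here's a hypothetical sketch (my own toy example, not any platform's real code or a specific API proposal) of a follows-plus-linear-feed ranker where the only knobs are sorts and filters the user picks themselves:

      from dataclasses import dataclass

      @dataclass
      class Post:
          author: str
          timestamp: float
          text: str

      def build_feed(posts, followed, user_filter=None, newest_first=True):
          # Only posts from accounts the user follows; no per-user targeting model.
          visible = [p for p in posts if p.author in followed]
          # Any filtering is a rule the user chose and can inspect (e.g. a keyword mute).
          if user_filter is not None:
              visible = [p for p in visible if user_filter(p)]
          # A linear, auditable ordering: just sort by time.
          return sorted(visible, key=lambda p: p.timestamp, reverse=newest_first)

    The whole ranking policy fits on one screen, which is what would make the kind of audit described above tractable.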

    To reiterate: none of this applies if you're below some user cutoff.

    Q: Will this kill innovation in social media? A: What fucking innovation?

  • makeitdouble an hour ago

    Real life needs full anonymity too. Not everywhere, but it's critical to have some.

    For instance, a political vote needs to be anonymous. Access to public space typically is (you're not required to identify yourself to walk the street), even if that anonymity can be lifted, etc.

    Real life is complex, and for good reasons; if we want to take it as a model, we should integrate its full complexity as well.

  • krapp an hour ago

    Kiwifarms is an obvious object lesson in why anonymity online is necessary, and hardly the only one.

    • creata 3 minutes ago

      I agree with you, but it's funny that someone else could say the opposite (i.e., that Kiwifarms shows how anonymity lets people get away with saying and doing horrible things) and still sound reasonable.

profsummergig 2 hours ago

I used to be disappointed in myself that I didn't understand Discord well enough to use it.

Now I'm glad I never understood it well enough to use it.

  • stevage 2 hours ago

    Huh. I'm on a few discords. They're very easy and obvious to use, and I really enjoy them. And because they are generally well divided by channel, it's easy to avoid the bits you don't want.

api 2 hours ago

It's more specific than social media. It's engagement maximizing (read: addiction maximizing) algorithms. Social media wasn't nearly as bad until algorithmic engagement maximizing feeds replaced temporal or topic based feeds and user-directed search.

Two people walk past you on the street. One says "hi," and the other strips naked and smears themselves with peanut butter and starts clucking like a chicken. Which one maximizes engagement?

A politician says something sane and reasonable. Another politician mocks someone, insults someone, or says something completely asinine. Which one maximizes engagement?

This is why our president is a professional troll, many of our public intellectuals are professional trolls, and politics is becoming hyper-polarized into raging camps fixated on crazy extremes. It maximizes engagement.

The "time on site" KPI is literally destroying civilization by biasing public discourse toward trash.

I think "trash maximizes engagement" should be considered an established fact at this point. If you A/B test for engagement you will converge on a mix of trolling, tabloid sensationalism, fear porn, outrage porn, and literal porn, and that’s our public discourse.

scarface_74 3 hours ago

I really hate the narrative that social media has increased polarization knowing that my still living parents grew up in the Jim Crow south where they were literally separated from society because of the color of their skin.

The country has always been hostile to “other”. People just have a larger platform to get their message out.

  • linguae 3 hours ago

    As someone whose grandparents endured Jim Crow, I largely agree in the sense that social media did not create America’s divides. Many of the divides in American society are very old and are very deep, with no easy fixes.

    Unfortunately algorithmic social media is one of the factors adding fuel to the fire, and I believe it’s fair to say that social media has helped increase polarization by recommending content to its viewers purely based on engagement metrics without any regard for the consequences of pushing such content. It is much easier to whip people into a frenzy this way. Additionally, echo chambers make it harder for people to be exposed to other points of view. Combine this with dismal educational outcomes for many Americans (including a lack of critical thinking skills), our two-party system that aggregates diverse political views into just two options, a first-past-the-post election system that forces people to choose “the lesser of two evils,” and growing economic pain, and these factors create conditions that are ripe for strife.

    • dfxm12 2 hours ago

      > Unfortunately algorithmic social media is one of the factors adding fuel to the fire

      Saying social media fans the flames is like saying ignorance is bliss. Mainstream media (cable news, radio, newspapers, etc) only gives us one, largely conservative, viewpoint. If you're lucky, you'll get one carefully controlled opposing viewpoint (out of many!). As you say, our choices are usually evil and not quite as evil.

      Anger is not an unreasonable reaction when you realize this: when you realize that other viewpoints exist, that the mainstream media and politicians are not acting in anyone's best interest but their own, and that there really are other options (politically, for news, etc.). Social media is good at bringing these things to light.

      There are no easy fixes to the divides you're talking about, but failing to confront them and just giving in to the status quo, or worse, continuing down our current reactionary trajectory, is probably the worst way to approach them.

    • scarface_74 2 hours ago

      So there wasn’t enough fuel in the fire when marauding Klansmen were hanging Black people?

      It was the current President of the US who led a charge claiming that a Black man running for President wasn’t a “real American” and was a secret Muslim trying to bring Sharia law to the US, and close to half of the US was willing to believe it.

      https://www.youtube.com/watch?v=WErjPmFulQ0

      This was before social media, in the northern burbs of Atlanta where I had a house built in 2016. We didn’t have a problem during the seven years we lived there. But do you think they were “polarized” by social media in the 80s?

      That’s just like how police brutality didn’t start with the rise of social media. Everyone just has cameras and a platform.

  • tolerance 3 hours ago

    > The country has always been hostile to “other”. People just have a larger platform to get their message out.

    And a consequence of this is that some people’s perspective of the scale of the nation’s hostilities is limited to the last 5 years or so.

  • nextaccountic 2 hours ago

    One of the factors that led to the Rwandan genocide was the broadcast of the RTLM radio station

    https://en.wikipedia.org/wiki/Rwandan_genocide#Radio_station...

    The radio didn't create the divide, and it wasn't the sole factor in the genocide, but it ingrained in the population a sense of urgency in eliminating the Tutsi, along with a stream of what was mostly fake news to show that the other side was already committing atrocities against Hutus.

    When the genocide happened, it was fast and widespread: people would start killing their own neighbors at scale. In 100 days, a million people were killed.

    The trouble with social media is that they somehow managed to shield themselves from the legal repercussions of heavily promoting content similar to what RTLM broadcast. For example, see the role of Facebook and its algorithmic feed in the genocide in Myanmar

    https://systemicjustice.org/article/facebook-and-genocide-ho...

    It's insane that they can get away with it.

    • scarface_74 2 hours ago

      And there wasn’t a history of genocide of the “other” before then? Hitler in Germany and the mass murder in Tulsa in 1921 didn’t need social media.

      History has shown people don’t need a reason to hate and commit violence against others.

      • ants_everywhere 8 minutes ago

        I think you're underestimating the role deliberate propaganda has played in mass murder.

        Propaganda and ideology were a major part of the Nazi rise to power.

        Marx, Engels, and Mussolini were all in the newspaper business. Jean-Paul Marat's newspaper was very influential in promoting the French reign of terror, with some claiming he was directly responsible for the September Massacres. Nationwide propaganda was a major priority from day one for Lenin and those after him in Soviet Russia.

        Similarly with the Cambodian genocide, Great Leap Forward, Holodomor, etc.

        Propaganda even played a big role in Julius Caesar's campaign against the Gauls, some two millennia before social media.

      • macintux an hour ago

        People don’t need guns to kill, either, but that doesn’t mean that they don’t make for more effective weapons.

  • gdulli an hour ago

    But we made progress away from that and now we've regressed back towards it recently, aided by social media.

    • scarface_74 38 minutes ago

      Exactly when did we make progress? In 2008, before social media really took off, how much of the population was yelling that a Black man wasn’t a “real American” and was a “secret Muslim”?

      Before then we had the “Willie Horton ads”. Not to mention that Clinton performatively oversaw the execution of a mentally challenged Black man to show that he was tough on crime.

      https://jacobin.com/2016/11/bill-clinton-rickey-rector-death...

      Yes, I know that Obama was also a champion of laws like the Defense of Marriage Act. We have always demonized the “other” in this country. It was just hidden before.

  • jwilber 3 hours ago

    The article mentions this. It tries to argue the significance of that platform.

johnea 3 hours ago

Man, blah, blah, blah...

That article needs to have about 80% of the words cut out of it.

When the author straight up tells you: I'm posting this in an attempt to increase my subscribership, you know you're in for some blathering.

In spite of that, personally I think algorithmic feeds have had a terrible effect on many people.

I've never participated, and never will...

793212408435807 2 hours ago

Number 3 will shock you!

What a shame that these clickbait headlines make it to the front page.

epolanski an hour ago

Looking at this very comment section the author may have a point.