kelseyfrog 18 hours ago

Wild how out-of-bounds it apparently is to say, but even if age verification was empirically proven to protect kids, I’d still be against it.

It's taboo in our culture to say this, but what keeps me up isn't just what people are afraid of; it's how far they’ll go to feel safe. That’s how monsters get made.

We’ll trade away the last scraps of online anonymity and build a legally required censorship machine, all for a promise of safety that's always just out of reach. And that machine sticks around long after anyone remembers why it was built, ready to be turned on whoever’s out of favor next, like a gun hanging above the door in Act One.

But say this out loud and suddenly you're the extremist, the one who "doesn’t care about kids." We’re already past the point where the "solution" is up for debate. Now you just argue over how it'll get done. If you actually question the wisdom of hanging surveillance over the doorway of the internet, you get boxed out, or even labeled dangerous.

It's always like this. The tools of control are always built with the best intentions, then quietly used for whatever comes next. History is clear, but polite society refuses to learn. Maybe the only real out-of-the-box thinking left is not buying the story in the first place.

  • felindev 16 hours ago

    > We’ll trade away the last scraps of online anonymity and build a legally required censorship machine, all for a promise of safety that's always just out of reach.

    A good example would be the EU's proposed "chat control" regulation: wiretapping every channel (even encrypted ones) on the off chance that illegal material might be shared.

  • Eddy_Viscosity2 11 hours ago

    > The tools of control are always built with the best intentions

    Not to be overly pessimistic, but I'd say tools of control are only occasionally built with the best intentions. Normally they are built with, if maybe not the worst, then certainly bad intentions. Good intentions are the marketing spin that comes after the fact to ease adoption, like lube on a blunt object headed for your nether regions.

  • CJefferson 16 hours ago

    Can you explain why you think the online world should be so different to the physical world, which is full of places where I need to use an ID to get age-limited items? I really don't feel monsters were made by stopping children buying porn and alcohol. Do you think those age restrictions should be removed too?

    Particularly as more of society moves from physical to virtual.

    • fellowniusmonk 16 hours ago

      If bouncers copied my ID, my home address, and a bunch of private data every time I went to a bar, I'd never go out.

      This whole premise is absurd. There is plenty of research and empirical and historical evidence that living in a surveillance state stifles free expression and thus narrows the richness of human creation and experimentation.

      How old are you that you think constant surveillance is any kind of way to live? It's a thin gruel of a life.

      • lif 4 hours ago

        This seems like such a lost cause to carry on about. The fact that the post originates from what appears to be a furry-aligned individual is probably not going to help get a majority of people to be sympathetic.

        There appears to be no formidable organized resistance to the surveillance boom of recent decades.

        With tech companies and many tech employees actively accelerating surveillance.

        Horrible? yes. And extremely unlikely to be rolled back anytime soon. (Disagree? I'd love to believe you are right!)

      • CJefferson 14 hours ago

        I'm happy to discuss different methods of age verification, I agree I don't want websites copying my ID.

        However, I'm not against the concept of age verification, and believe it can be done well.

        How old are you, to make that last comment just because you need your ID to buy a beer?

        • rpdillon 6 hours ago

          > just because you need your ID to buy a beer

          You keep making this comparison, but it's not appropriate. The closest real-world analogy: in order to buy alcohol, you need to wear a tracking bracelet at all times, and be identified at every store you enter, even if you choose to purchase nothing. If our automated systems can't identify you with certainty, you'll be limited to only being able to do things a child could do.

          And the real world has a huge gap between a child and an adult. If an 8-year-old walked into Home Depot and bought a circular saw, there's no law against it, but the store might have questions. If a 14-year-old did it, you might get a different result. At 17, they'd almost certainly let you buy it.

          The real world has people that are observing things and using judgement. Submitting to automated age checks online is not that.

          • CJefferson 6 hours ago

            It's appropriate (to me) as a limit society has decided it wants, and we should consider if there is a reason similar limits should, or should not, apply to the internet. The whole article we are discussing is about how that could be implemented in a much more privacy-safe way.

            • rpdillon 6 hours ago

              But my point is that it won't be. The laws are getting passed, and there is no privacy preservation, there are no ZKPs, there's nothing except "submit your ID". You keep holding out for good faith, but the folks making the rules aren't acting in good faith. I very much appreciate the discussion here, but I think we're coming into the discussion with a different set of priors, so even if our values match, we might not agree.

      • teddyh 16 hours ago

        > It's a thin gruel of a life.

        But it’s the only one they will ever know.

    • rpdillon 6 hours ago

      > online world should be so different to the physical world

      If you take a step back, they are _very_ different, in myriad ways. But to answer your question very concretely: because we're turning the web into a "Papers, Please" scenario, and the analogy with "I'm 12 but I can't walk into this smoke shop" doesn't hold. I shared a story on HN that didn't take off about how Google is now scanning _all_ YouTube accounts with AI, and if their AI thinks you're underage, your only recourse after they "kid-limit" your account is to submit a government-issued ID to get it back.

      This has nothing to do with buying cigarettes and alcohol. This is about identifying everyone online (which advertisers would be thrilled about), and censoring speech. In short, the mechanisms being used online are significantly more intrusive than anything in the real world.

      https://news.ycombinator.com/item?id=44740409

      • CJefferson 6 hours ago

        I'm happy to grant that they are very different, and I agree the current systems (in the UK in particular) are awful.

        However, I think tech people risk losing this battle by saying (it seems to me, and in the post I originally replied to) "any attempt at any age checking on the internet is basically 1984", rather than "we need some way of checking some things, keeping people's privacy safe, this current system is awful."

        Of course, if some people believe the internet should be 100% uncensored, no restrictions, they can have that viewpoint. But honestly, I don't agree.

        • rpdillon 6 hours ago

          I'm a huge proponent of legislation that requires sites to send a header indicating they are serving adult content in that response. I'm also a huge proponent of basic endpoint security that allows a parent to put the device into a mode that checks for those headers and blocks the response.

          This doesn't require any of the draconian 1984 measures that folks are insisting upon. The problem is that there is no real incentive to implement true age verification in this manner (this is why nobody has deployed ZKP); the incentive is to identify everyone. It would be ultra easy to imagine an onboarding scenario during device setup that asks:

          1. Will this device be assigned to a child?

          2. Supply the age so we can track when they cross over 18

          3. Automatically reject responses with the adult header and lock down this setting

          But Google and Apple won't do that, because they don't care, and the politicians won't bake it into their laws, because they don't care either: their goal is to alter culture, and protecting children is just an excuse.
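
          For what it's worth, the device-side half really is trivial. Here's a rough sketch in Python (the "Content-Rating: adult" header name is hypothetical, not any real standard, and a real implementation would sit at the OS or network layer rather than in one fetch function):

            import urllib.request

            CHILD_MODE = True  # set during onboarding, locked behind the parent's credentials

            def fetch_filtered(url: str) -> bytes:
                # Fetch a page, but refuse to hand back the body if the (hypothetical)
                # "Content-Rating: adult" response header is present while in child mode.
                with urllib.request.urlopen(url) as resp:
                    if CHILD_MODE and resp.headers.get("Content-Rating", "").lower() == "adult":
                        raise PermissionError("blocked: site declares adult content")
                    return resp.read()

          The point is that the heavy lifting stays on the endpoint; the site only has to declare itself.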

    • techjamie 8 hours ago

      The issue is, it's not feasible to enforce these sorts of bans because the internet is too vast. Yes, you can stop people from visiting PH or any of the big sites, but for every big porn site, there will be thousands of fly-by-night ones looking to make a buck. Age verification laws create a market for such sites, which can be run out of jurisdictions the law can't reach.

      So next, we better make the devices age gate their users with attestation and destroy people's ability to use open operating systems on the web. Maybe for good measure we tell ISPs to block any traffic to foreign sites unless the OS reports that attestation.

      But people are using VPNs to bounce traffic to other countries anyway, so now we need to ban those. But people still send each other porn over encrypted channels so we need to make sure encrypted platforms implement our backdoor so we can read it all, on top of on-device scanning which further edges out any open source players left in the game.

      At what point do we stop chasing the rabbit?

    • jrflowers 16 hours ago

      > Can you explain why you think the online world should be so different to the physical world

      When you show a bartender your ID to buy a beer, they generally don’t photocopy it and store it along with your picture next to an itemized list of every beer you’ve ever drunk

  • throwaway290 13 hours ago

    > if age verification was empirically proven to protect kids, I’d still be against it.

    It's really wild. Imagine a hypothetical ideal implementation. ZKP. No privacy issues, completely safe. And yet people are STILL against it. I can understand pro-privacy advocates, but I really don't know what kind of person would think this.

    What's extra wild is that there's no justification given for this in your comment. There's some completely unrelated stuff about censorship and anonymity. The point, per the headline, is no privacy issues; you get to keep your privacy.

    • Eddy_Viscosity2 10 hours ago

      They are against it because they know a bait-and-switch setup when they see it. The people promoting this hardest are not concerned about child safety; what they want is a monitoring/censorship system for the internet, for the vast political and economic control it would enable. Even if they did implement a perfect ZKP initially, it would not be long before it got diluted and eventually became a full-on tracking system.

    • ygritte 11 hours ago

      > Imagine a hypothetical ideal implementation.

      You say it yourself: it's hypothetical. In reality it will be implemented in a way that enables all kinds of abuse and security issues.

    • kelseyfrog 7 hours ago

      The history on this is really clear: when you create a "for your safety" speech control, it gets used for all sorts of other stuff.

      In my dataset of free-speech limiting examples for safety reasons, 89% eventually expanded in scope to limit speech relating to LGBTQ, feminist, women's health, and politics. This isn't a hypothetical - it happens over and over again. Each time we have folk pointing this out, and each time we have people saying, "You're overreacting."

      ZKP or not, if you make Chekhov's gun, someone's going to use it. Privacy isn’t the point. Unless your ZKP also magically prevents scope creep and political misuse, hard pass from me.

      It’s like going to the gunsmith and saying, "Don’t build Chekhov another gun, you know it’s going to go off," and he just shrugs and says, "There’s no way it happens a twentieth time."

      • rpdillon 6 hours ago

        We don't always agree, but this is spot on. Administrations change, but systems of surveillance and control persist. Just because we can imagine a way for a system to do good does not mean it will mostly be used for good.

      • throwaway290 3 hours ago

        Can you explain how this limits free speech? And where do you find examples similar to age checks in the past for it to be Chekhov's gun?

        When you say that even if this helps protect children (which, from what I've seen, it probably does) you are against it anyway, I would put your objection before the end of that sentence

        • kelseyfrog 2 hours ago

          I'm going to give you a homework assignment. Find twenty examples of free-speech limiting laws, policies, or practices. Then go through each one and determine if it was also used to limit political speech, feminist or LGBTQ related speech, or information on women's health. Report back with your findings.

  • Tadpole9181 16 hours ago

    > But even if age verification was empirically proven to protect kids, I’d still be against it.

    Even with an effective implementation via something like zero knowledge proofs? It seems like it's entirely reasonable to say your position is (in this hypothetical) objectively wrong?

    Like arguing that even if we know firefighters save lives, you'd still be against it, because "fear and the desire to feel safe are how monsters are made".

    I disagree with these policies (because they aren't safe, and I disagree that children are in a danger best prevented through this kind of measure), but I also disagree with you vehemently. If I'm wrong and we can genuinely prevent harm and the worst cost is an inconvenience (again, without the risk of a data leak), then I'm wrong and we should do it.

    • jaredklewis 16 hours ago

      The parent is not saying they’re against protecting kids; the parent is saying even if these measures do protect kids (which is disputable) they’re against them because of the side effects. For example, it’s a wedge to let governments and corporations de-anonymize the internet and snoop on everyone.

      Not hard to imagine that kids in North Korea are exposed to less web porn. Doesn’t mean we want to live in NK.

      • Tadpole9181 16 hours ago

        TFA is talking about how to do it without the side effects. In light of that context, saying that you do not care even if it is proven to be a good thing that can be done safely is objectively the wrong take. You'd be hurting people for no reason?

        • jaredklewis 8 hours ago

          I think zero knowledge proofs still have some side effects. For sure, it's a massive improvement on the measures being proposed and implemented now.

          Even TFA recognizes that a good zero knowledge proof system doesn't eliminate all risk; it just reduces and shifts it.

        • electricboots 16 hours ago

          Just because you say it protects the children doesn't convince me it does. I think "side effects" is doing some heavy lifting here, far beyond the scope of the assured anonymity (which seems questionable on its own), for this to be a reasonable hypothesis

          • Tadpole9181 9 hours ago

            Please read comments you reply to.

            They brought up the hypothetical that it is empirically proven to be true.

            And I explicitly said I don't think that's the case.

        • jack1243star 16 hours ago

          You seem to throw around the concept of "objectively wrong" a lot, which I cannot understand. How can an opinion be objectively wrong?

          • Tadpole9181 9 hours ago

            To just go for the extreme: murdering someone for no reason is objectively wrong.

            The hypothetical that was brought up stops this from being an opinion and moves it into plain fact territory. If you can prevent harm with no downside, doing nothing is not an opinion, it's pretty clearly just immoral.

    • vaylian 16 hours ago

      > Even with an effective implementation via something like zero knowledge proofs?

      Someone is still controlling the execution of this proof. It's possible to deny people access to gated information. It's not about protection. It is about control.

perihelions 16 hours ago

From this week's news, US prosecutors are subpoenaing a list of every person who attended a gay drag show—under a law ostensibly written to effectively enforce age limits for "adult, sexualized" content[0,1].

You cannot separate the social context from the technical problem; or pretend that if you've designed a cryptographic protocol in some Platonic model reality, you've also solved some real problem in the real world. These things are privacy footguns because people want them to be privacy footguns—they're constructed that way, intentionally. The lack of privacy, the deterrent potential of public shaming, is a desired feature for many of the people pushing these things.

The error is in assuming that privacy is a common, shared value people agree on—a starting point for building technical solutions on top of. It isn't. It's an ideological dividing line.

[0] https://www.eff.org/deeplinks/2025/07/you-went-drag-show-now... ("You Went to a Drag Show—Now the State of Florida Wants Your Name ")

[1] https://apnews.com/article/florida-drag-show-law-vero-beach-... ("Florida’s attorney general targets a restaurant over an LGBTQ Pride event")

> "Just like the Kids Online Safety Act (KOSA) and other bills with misleading names, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy... EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database."

coppsilgold 18 hours ago

I don't see how a scheme where you allow the generation of multiple tokens will be practical when the token itself has value decoupled from the concerns of the generator - such as when the token doesn't give access to your personal account.

If the token signifies you are 18+ and nothing else and if the generation limits are such as to be reasonable then people will generate some fraction of their total tokens just to sell them, or use their elderly relative's tokens.

The kids will be trading these tokens to each other in no time. Token marketplaces will emerge. The 18+ function of the token will just become a money/value carrier.

If you limit it to one token per person, the privacy implications will be devastating. All online presence where being 18+ is required will be linked.

  • felindev 16 hours ago

    As the saying goes, "the optimal amount of fraud is non-zero".

wkat4242 18 hours ago

I'm not on board with age verification at all. Even if it can be done in a private way. I'll just VPN or something, as I'm in the EU and they're dumping this crap on us now.

I'm more than old enough for anything and I have never been 'carded' in my life. In fact I rarely carry ID anyway (even though it's mandatory). Not going to start now.

  • boneitis 17 hours ago

    Right. There's still something I found unsettling about performing searches without restraint on Kagi (which, until recently, absolutely required being logged in) that I wouldn't have thought twice about on a common search engine.

    Unfortunately, the VPN experience has been deteriorating quickly as BigCo and BigGov have been catching up in natural escalation.

    • BriggyDwiggs42 17 hours ago

      The next thing is probably a VPS-hosted VPN, right?

      • boneitis 17 hours ago

        well, given the pervasiveness of KYC requirements these days, i reckon that would still feel not unlike being required to log in in order to use a search engine.

        moreover, it's already fairly common for web service operators to proactively block/shadowblock swaths of VPS ranges.

        • BriggyDwiggs42 15 hours ago

          >it's already fairly common for web service operators to proactively block/shadowblock swaths of VPS ranges.

          Ah damn. I was hoping that would be a good fallback.

          • boneitis 3 hours ago

            I wouldn't call it a "good" fallback, but i do have a VPS handy with an always-on squid proxy (remember to bind only on localhost and use via ssh tunnel, or some other secure method, if anyone is going to get ideas from this comment) among the other things i use my VPS for.

            I do find that different subsets of services tend to get blacklisted.

      • sshine 14 hours ago

        VPSes aren’t great for running VPNs from because their IP addresses are so obviously not residential.

  • NitpickLawyer 17 hours ago

    Eh, it's still tricky. Visiting from a VPN gets you subpar experiences in around 30-50% of sites, I would say. From search engines that rate limit you to one or two searches per hour, to things like spotify simply not working. Forums, social media & co that aren't doing verification will also throttle you, shadow ban you and so on.

    I get why some sites use these kinds of IP filtering, but the net result is sadly bad for anyone trying to do this.

    • general1726 16 hours ago

      So the future actually is self-hosted.

AngryData 16 hours ago

The only thing we need, and should accept, is websites putting a content flag on their sites or apps that any child-restriction software or add-on can read and either allow or block. It is a parent's job to limit what their children access; it is not the government's job to rubber-pad the entire world so you can just let your kids run around like feral pigs.

LegionMammal978 21 hours ago

Even with a "privacy-preserving" mechanism, I'd remain worried about censorship risk. Are you a government, and you want to punish one of your citizens without lifting a finger? Then deny them the ability to verify their ID with anything!

In principle, you could probably cook up some mechanism to prevent this. But then the information would also be irrevocable in case of error, which I doubt governments would accept. Not that ID verification is a foolproof proxy for the actual physical user in any case, short of invasive "please drink verification can"-style setups, which I worry might look tempting.

  • magicalhippo 19 hours ago

    My reading of the EU proposal has licensed third parties doing the age verification step.

    The gov't could threaten to revoke the license, but doing so would inconvenience all their users, not just the target. So the third party has leverage to dismiss the gov't.

    Of course lots of factors in play, but should be at least a bit better than the gov't doing the age checks.

    • LegionMammal978 18 hours ago

      At least in the U.S., the experience is that businesses will do a lot of things if some level of government 'politely' asks them to. "This account is fraudulent, please delete it." (Or perhaps by waving the stick of "for reasons of national security".) The business doesn't really have any incentive to get in a fight over it, especially if the target wouldn't look sympathetic in the media. I haven't heard much suggesting that typical EU businesses are any different in this regard.

  • progval 18 hours ago

    > Then deny them the ability to verify their ID with anything!

    Then it's up to legislators to make this illegal. Or at least restrict it to specific purposes, and with a judge's approval.

averysmallbird 19 hours ago

The pro-age verification folks have been talking about ZKPs for years now. Here’s one of the legal proponents of the Texas law, and now General Counsel at the FCC, referencing ZKPs[1]. More sophisticated folks have been pitching actual implementations for a while.

Setting aside whether age verification is desirable or a net benefit, some of the discourse is colored by folks that want to make it as painful and controversial as possible so they don’t have to do it.

[1] https://americarenewing.com/issues/identity-on-the-internet-...

  • rpdillon 6 hours ago

    And yet the laws get passed with no working ZKP system in place. Telling.

jmogly 20 hours ago

To get to the gist: you shouldn’t need to show pornhub your ID to verify your age. You should be able to verify your age with an identity provider that issues you a signed token, for example.

The signed material does not contain any identifiable information about you, and sites like pornhub can check the token against the identity provider to confirm your age, roughly along the lines of the sketch below.
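
A minimal sketch of that flow, in its simplest (deliberately naive) form: the provider signs an "18+" claim that contains no identity at all, and the site checks it against the provider's published key. This assumes the third-party `cryptography` package; the function names are made up, and a real scheme would also blind the token so the issuer can't link issuance to redemption.

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Identity provider: holds the signing key, publishes the public key.
    issuer_key = ed25519.Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def issue_age_token() -> tuple[bytes, bytes]:
        # The claim asserts "18+" plus a random nonce -- no name, no ID number.
        claim = b"age>=18:" + os.urandom(16)
        return claim, issuer_key.sign(claim)

    # Site (e.g. pornhub): checks the signature against the provider's public key.
    def verify_age_token(claim: bytes, sig: bytes) -> bool:
        try:
            issuer_pub.verify(sig, claim)
        except InvalidSignature:
            return False
        return claim.startswith(b"age>=18:")

    claim, sig = issue_age_token()
    assert verify_age_token(claim, sig)

The obvious gaps - the issuer can log what it signs, and the token is trivially transferable - are exactly what the blinding and single-use ideas in the replies are about.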

  • Borealid 18 hours ago

    What stops an adult from creating these anonymous tokens and then letting others use them?

    • SkyeCA 18 hours ago

      What stops an adult from buying alcohol and letting an under-18 drink it?

      • guappa 16 hours ago

        Nothing, which is why these laws are useless (to protect children), but very useful to monitor everyone.

      • tshaddox 16 hours ago

        What answer are you even looking for? There’s no proactive law enforcement waiting to bust down your door if you give underage kids alcohol. (Note this is true of nearly all crimes.)

        But if a kid dies of alcohol poisoning or drunk driving, you can certainly get in serious legal trouble. Those two things (not wanting kids to be harmed by alcohol, and not wanting legal trouble) stop a very large number of adults from giving minors alcohol.

        • SkyeCA 9 hours ago

          I am simply making the point that no verification system is going to be perfect, it's an impossible goal.

          I also would personally prefer we not destroy the internet in pursuit of that goal.

    • CJefferson 13 hours ago

      What stops adults from giving children drugs and alcohol?

      You put severe penalties on the crime, then you catch people doing the crime. Offer a reward for catching people, and I'm sure a few kids will turn people in for the reward. They'll be able to prove they got a token from someone (as they'll have it), and then we investigate.

    • dbetteridge 18 hours ago

      Make them extremely time limited like an OTP token?

      Require a token to be provided by the requester that is used to sign the response token, so it's limited to a single use

      • Tadpole9181 16 hours ago

        Tokens need to be single-use or you create a new side channel. Time limiting is also a challenge without creating a side channel, though it's possible with a mechanism similar to 2FA.
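
        A toy sketch of the bookkeeping side of both ideas (illustrative only, none of the crypto): the token carries an expiry and the verifier remembers redeemed nonces, so each one works exactly once and only within its window.

          import time

          redeemed: set[bytes] = set()  # spent-token store; sharing this across servers is its own privacy problem

          def redeem(nonce: bytes, expires_at: float) -> bool:
              # Accept a token only inside its validity window, and only once.
              if time.time() > expires_at:
                  return False  # expired, like a lapsed OTP code
              if nonce in redeemed:
                  return False  # already spent
              redeemed.add(nonce)
              return True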

  • Lio 17 hours ago

    Will the identity provider be able to match my token with me?

    How does it guarantee anonymity?

    i.e. in this scenario will they know that my token was passed to PornHub?

    • Tadpole9181 16 hours ago

      No, they do not provide a raw token that you forward to the relying party, so it cannot be looked up later. And the data they do provide is verified against a public key in a way that guarantees it is not unique to you.

      Look up Zero Knowledge Proofs, or Kagi's Privacy Pass, if you want to see details.
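
      For a feel of how that unlinkability works, here's a toy textbook RSA blind signature (insecure as written, and not the exact construction Privacy Pass uses, but it shows why the issuer can't match the token you redeem to anything it logged at issuance). It assumes the third-party `cryptography` package for key generation only:

        import hashlib, secrets
        from cryptography.hazmat.primitives.asymmetric import rsa

        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        pub = key.public_key().public_numbers()
        n, e = pub.n, pub.e
        d = key.private_numbers().d

        # User: pick a random token and blind it before sending it to the issuer.
        token = secrets.token_bytes(32)
        m = int.from_bytes(hashlib.sha256(token).digest(), "big")
        r = secrets.randbelow(n - 2) + 2          # blinding factor (coprime to n w.h.p.)
        blinded = (m * pow(r, e, n)) % n          # this is all the issuer ever sees

        # Issuer: signs the blinded value after checking your age; learns nothing about `token`.
        blind_sig = pow(blinded, d, n)

        # User: unblind. The signature verifies, but it can't be matched to `blinded`.
        sig = (blind_sig * pow(r, -1, n)) % n
        assert pow(sig, e, n) == m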

  • taneq 18 hours ago

    This is an improvement because only the identity provider(s) have your ID, but now you also have a central database of all the age-verification-requiring sites that many people use, along with those people’s IDs.

    You could argue that the sites requesting access tokens won’t be cached /s but in practice that’s not how it’ll work. You could also have a separate request-forwarder service that sits between the age verifier and the site-that-you-don’t-want-logged, I guess, which would make it harder to get all the required info in one place.

CobrastanJorji 18 hours ago

I totally agree with the author's main point: if we must do "age verification," we should do it through third party identity providers and not directly give our information to everyone.

I have a semantic question, though. If I get tokens from an identity provider which I then pass to an adult website, is that really a "zero knowledge" proof? It's been a while, but I don't think that's a zero knowledge proof. Or maybe it is? I'm not sure what the formal definition is.

  • Tadpole9181 16 hours ago

    Yes, it's zero knowledge as long as you don't consider knowledge of which provider you use to be knowledge. Which, if used at scale, shouldn't be.

    The token isn't one that you receive and use as-is, so there's no way for the token's generator to tie it to your identity. And when redeemed, the generator can only confirm the token is valid, not that you made it (and therefore what service you're using). Kagi has some articles on the technical details for their "Privacy Pass" feature.

    However, using a VPN and pre-generating tokens is still recommended to prevent side-channel attacks based on timing.

kelseydh 18 hours ago

One of the most ridiculous things about age verification is the assumed ages for using things. For example, recommended age ratings for movies way overestimate how old somebody should be to watch them. I was watching NC-17 movies at the age of 7. Powerful experience, but I grew up a normal person. I still remember being 10 and thinking how ridiculous the PG-13 and R classifications were relative to the level of maturity I already had at that age. Thankfully I had parents who didn't care and I could watch whatever I wanted.

  • riffraff 18 hours ago

    Kids are generally way more resilient than they get credit for, but not all kids are the same.

    I have two, one of them was fine watching people's faces melting in Raiders of the Lost Ark when he was 6, the other had nightmares for a couple days after seeing Gollum in LoTR.

    The regulations on age are by necessity arbitrary, but I don't think they're completely stupid, even tho I agree parents should be the ones responsible in the end.

  • tshaddox 16 hours ago

    Those content ratings are mostly information for parents who feel strongly about what content they allow their children access to.

renegat0x0 17 hours ago

The real question is whether it will stop at this point, or whether this is only the first step.

The next question is whether it will work at all. Those who want to find it will. And if that's true, why is this verification in place at all?

  • trhway 17 hours ago

    The end game is naturally to have all your online activity associated with your real ID. The government wants this, Big Tech wants this, thus there are no real barriers and, in a frog-boiling way, it will be done at least for the majority of users.

_zoltan_ 17 hours ago

Around 2013ish I worked on a ZKP-based SAML-like authentication scheme where almost nobody knows anything:

- you could use your corp ID to log in to pornhub, as the provider doesn't know to whom it verifies the request

- pornhub wouldn't know you used your corp ID

We got as far as a demo out of it but never commercialized it, as far as I know.

This came after a trial project with the UK about ZKP-based age verification; ours was kinda the next step, where you could verify more than just your age online.

  • MattPalmer1086 15 hours ago

    Interesting - did anyone write up how it worked anywhere?

    I also worked on a similar system in 2015, which provided anonymity and unlinkability in almost all interactions (you don't know who it is, and you don't know whether the anonymous user is the same one you saw last time).

    You did have to pay for the service of course, but it issued blind signature tokens for access (similar to what is described in the article). So the service did not know who actually did what.

    It could also provide anonymous attestation of some attribute (like age). This was a bit more efficient and secure in that you did not need to store a bunch of tokens. It could transform the proof to be unique each time (thus giving unlinkability). It would only work if you had access to your private keys (so you could not just give your age proof token to a kid - you would have to give them your entire account and keys).

4b6442477b1280b 17 hours ago

it is designed to be a privacy footgun. this wave of age verification bullshit is their foot in the door for "login with your government-issued ID". anonymous rabble congregating on the internet, spreading malinformation and expressing illegal opinions are extremely dangerous to our democracy. the ETA is 5-20 years until another wave of "safety" laws that will require your real identity to be linked to every clearnet website you interact with.

teddyh 16 hours ago

This would be great and all, but all parties who are in a position to choose to implement this kind of system or to keep the status quo are already motivated to keep (and expand) the existing systems, for any number of reasons. Everybody (except the end users) loves to keep that juicy metadata and incidental logs of everything.

(Quoting myself from 2021: <https://news.ycombinator.com/item?id=26538052#26560821>)

verisimi 18 hours ago

The problem with de-anonymising the internet is that I don't think the potential risk (my ID becoming public if the ID provider is hacked) is worth the potential good (preventing kids from adult experiences online). Is that position ok? Do I have any ability to avoid that risk? When was the case that 'we must do age verification' ever proven? If it wasn't, what exactly is going on?

So, I don't accept that this is even an acceptable idea. I hate that we are attempting to 'solutionize' on top of bad assumptions, as with this well-meaning article.

The real issue is that there is no proof that this is a 'good thing' to be done - there is no discussion of the loss of privacy rights. It has already been decided that de-anonymising is a good thing for corporations and governments, so the rest is just excuses.

This is actually manipulation on the part of governments to trick and coerce individuals into an action they do not want to take. Therefore, thoughtfully talking about how to 'mitigate the risk' is the equivalent of negotiating with kidnappers over the ransom, when the right answer is: no coercion. The answer to these questions should be that those who want them opt in, not forcing risk on everyone.

floppiplopp 15 hours ago

The corpos and people lobbying for this "age verification" aren't interested in child protection. They are just abusing child protection for their own gain: data collection. There is safe and secure tech to verify age without sending personal data. But the same lying assholes who claim to do it for the children are just vile agents of surveillance capitalism.