This loophole, “think of the children,” would not exist if SV had gotten over itself and not called every solution unworkable while insisting that any solution parents receive, no matter how sloppy or confusing, is workable.
Here's the actual title of the article, which is much more concerning than the HN title.
The worst that can happen is you don't change things.
The best? Maybe you'll find a receptive ear. Your lawmaker stops co-sponsoring KOSA. Your state AG stops pushing for it.
The idea would be that devices could "opt in" to safety rather than opt out. Allow parents to purchase a locked-down device that always includes a "kids" flag whenever it requests online information, and simply require online services to not provide kid-unfriendly information if that flag is included.
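A minimal sketch of what that could look like on the wire, assuming a hypothetical header name and refusal behavior (none of this is a real spec): the locked-down device stamps every request, and the service declines kid-unfriendly content when the stamp is present.

    # Hypothetical "kids flag": the device adds a header to every request;
    # services refuse kid-unfriendly content when they see it.
    import urllib.request

    KIDS_FLAG_HEADER = "X-Kids-Device"  # assumed name, not a standard

    def fetch(url, kids_device=False):
        req = urllib.request.Request(url)
        if kids_device:
            # Set by the locked-down device itself; the user cannot unset it.
            req.add_header(KIDS_FLAG_HEADER, "1")
        return urllib.request.urlopen(req).read()

    def serve(request_headers, content_is_kid_friendly):
        # Server side: the only new obligation is honoring the flag.
        if request_headers.get(KIDS_FLAG_HEADER) == "1" and not content_is_kid_friendly:
            return 451, b""  # refuse; 451 is "Unavailable For Legal Reasons"
        return 200, b"<html>...</html>"

The point of the design is that no age or identity ever leaves the device; the flag carries one bit.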
I know a lot of people believe that this is all just a secret ploy to destroy privacy. Personally, I don't think so. I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
"The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors."
Okay, so why aren't they going after the social media companies?
That doesn't mean they should get what they might want, or that it's Constitutional.
You can’t illegally retaliate against citizens if you don’t know where they sleep at night.
Aren't there sound reasons to support anonymous whistleblowing?
Would there be critical feedback without pseudo-anonymity on the internet?
But you folks just have to dox all the haters.
What is their favorite thing: stuffed animal brand, candy, musical artist?
But then wouldn't undercover ops be obvious?
Is this similar to the "ban all crypto" movements that periodically forget everything we've learned about infosec and protecting folks?
Do protectees deserve privacy for their safety?
In the 1990s, they told us kids not to use our real names or addresses on the internet.
"Many social media platforms deliberately target minors, fueling a nationwide youth mental health crisis."
". These platforms are intentionally designed to be addictive, particularly for underaged users, and generate substantial profits by monetizing minors’ personal data through targeted advertising. These companies fail to adequately disclose the addictive nature of their products or the well-documented harms associated with excessive social media use. Increasing evidence demonstrates that these companies are aware of the adverse mental health consequences imposed on underage users, yet they have chosen to persist in these practices. Accordingly, many of our Offices have initiated investigations and filed lawsuits against Meta and TikTok for their role in harming minors. "
Yet the companies aren't being regulated; nor are the algorithms, the marketing, or even their existence. It's the users that are the problem, so everyone has to submit their identity to use the Internet if this passes.
So the worst that can happen could be worse than nothing.
[] https://www.aclu.org/press-releases/department-of-homeland-s...
Some people have enough self control to do that and quit cold turkey. Other people don't even consciously realize what they are doing as they perform that maladaptive action without any thought at all, akin to scratching a mosquito bite.
If someone could figure out why some people are more self-aware than others, a whole host of the world's problems would be better understood.
I wouldn't say it's a lack of understanding, but that any compromise is seen as weakness by other members of their party. That needs to end.
As an optimistic person trying to find any silver lining here, I find this incredibly sad.
William Tong, Anne E. Lopez, Dave Yost, Jonathan Skrmetti, Gwen Tauiliili-Langkilde, Kris Mayes, Tim Griffin, Rob Bonta, Phil Weiser, Kathleen Jennings, Brian Schwalb, Christopher M. Carr, Kwame Raoul, Todd Rokita, Kris Kobach, Russell Coleman, Liz Murrill, Aaron M. Frey, Anthony G. Brown, Andrea Joy Campbell, Dana Nessel, Keith Ellison, Lynn Fitch, Catherine L. Hanaway, Aaron D. Ford, John M. Formella, Jennifer Davenport, Raúl Torrez, Letitia James, Drew H. Wrigley, Gentner Drummond, Dan Rayfield, Dave Sunday, Peter F. Neronha, Alan Wilson, Marty Jackley, Gordon C. Rhea, Derek Brown, Charity Clark, and Keith Kautz
--
Always operate under the assumption that the people serve the state, not the other way around. There are some names in that list that are outwardly infamous for this behavior, and none are surprising considering what type of person seeks to be an AG. Maybe fighting fire with fire is appropriate - no such thing as a private life for any of these people, all their communications open to the public 100% of the time, with precisely zero exceptions. It's only fair, considering that is what their goal is for everyone not of the state.
You need to make it easier for your lawmakers to be on that list too. Show them there are people who won't rake them over the coals for bowing out.
No.
I said your state Attorney General's office and your elected federal Senators and members of the House.
So I reiterate - the worst that can happen is you don't change where things are headed.
The best? Your elected officials bow out of this.
Even better, make the flags granular: <recommended age>, <content flag>, <source>, <type>
13+, profane language, user, text
17+, violence, self, video
18+, unmoderated content, user, text
13+, drug themes, self, audio
and so on... (a rough filtering sketch over these tuples follows)
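Purely as a sketch, with field names and policy assumed from the examples above (nothing here comes from any bill), a user agent could parse those tuples and let a parent block by flag rather than by blanket age:

    # Parse "13+, profane language, user, text"-style tuples and filter locally.
    from dataclasses import dataclass

    @dataclass
    class Rating:
        min_age: int       # "13+" -> 13
        content_flag: str  # e.g. "profane language"
        source: str        # "self" (publisher-declared) or "user" (user-generated)
        media_type: str    # "text", "video", "audio", ...

    def parse(line):
        age, flag, source, media = [p.strip() for p in line.split(",")]
        return Rating(int(age.rstrip("+")), flag, source, media)

    def allowed(rating, viewer_age, blocked_flags):
        # e.g. a parent who blocks "drug themes" but shrugs at profanity
        return viewer_age >= rating.min_age and rating.content_flag not in blocked_flags

    r = parse("13+, profane language, user, text")
    print(allowed(r, viewer_age=14, blocked_flags={"drug themes"}))  # True

The filtering happens entirely on the device, so no age or identity needs to be sent anywhere.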
We also need a more informed citizenry able to see through propaganda.
- It's much easier for web sites to implement, potentially even on a page-by-page basis (e.g. using <meta> tags).
- It doesn't disclose to service providers whether the user is underage.
- As mentioned, it allows user agents to filter content "on their own terms" without the server's involvement, e.g. by voluntarily displaying a content warning and allowing the user to click through it (sketched below).
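For illustration only, assuming a made-up meta name ("content-rating" is not a standard), the user agent could read such a page-level rating without telling the server anything about the user:

    # Read a hypothetical page-level rating from a <meta> tag, client side.
    from html.parser import HTMLParser

    class RatingMetaParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.rating = None

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "meta" and a.get("name") == "content-rating":
                self.rating = a.get("content")

    page = '<html><head><meta name="content-rating" content="13+, profane language"></head></html>'
    p = RatingMetaParser()
    p.feed(page)
    print(p.rating)  # "13+, profane language" -- warn, block, or click through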
What about foreign sites, or places that aren't trying to publish things for children? The default state should be unrated content, for consumers (adults) prepared to see the content they asked for.
It doesn't even matter if you can get something that technically works. Half the "age appropriate" content targeted at children is horrifying brainrot. Hardcore pornography would be less damaging to them.
Just supervise your damn children, people.
But I strongly prefer my solution!
Putting the conspiracy hat on: the exploit is to direct as many installed AGs as possible to push for such bills, with no big letdown if they don't pass. Why? Because the demographics on dissension are valuable, and are passed to a hostile federal government.
Parent's proposal is better in that it would only take away general purpose computing from children rather than from everyone. A sympathetic parent can also allow it anyway, just like how a parent can legally provide a teen with alcohol in most places. As a society we generally consider that parents have a right to decide which things are appropriate for their children.
Give it 20+ years and you'll be called a kook for thinking otherwise.
0+, kid friendly, self, interactive content
This could pretty easily be solved by just giving sites some incentive to actually provide a rating.
Also, not all 13-year-olds are at the same level of maturity, so the same material isn't appropriate for all of them. I find it very annoying that I can’t just set limits like: no drug references, but idgaf about my kid hearing swear words.
On other machines: I do not want certain content to ever be displayed on my work machine, and I’d like the ability to set that. Someone with a particular background may not want to see things like children in danger. This could even be applied to their Netflix algorithm. The website Does the Dog Die does a good job of categorizing these kinds of content.
Maybe I will have more energy for it tomorrow. I've been through this probably a couple dozen times on HN, and I don't have the energy to go through the whole rigmarole today. It usually results in 2-3 days of someone fiercely disagreeing down some long chain; in the end I provide all the evidence, but by that point no one is paying attention, and it turns into a pyrrhic victory where I get drained dry just for no one to give a shit. I should probably consolidate it into a blog post or something.
A bloc of 40 state and territorial attorneys general is urging Congress to adopt the Senate’s version of the controversial Kids Online Safety Act, positioning it as the stronger regulatory instrument and rejecting the House companion as insufficient.
The Act would kill online anonymity and tie online activity and speech to a real-world identity.
Acting through the National Association of Attorneys General, the coalition sent a letter to congressional leadership endorsing S. 1748 and opposing H.R. 6484.
We obtained a copy of the letter for you here.
Their request centers on structural differences between the bills. The Senate proposal would create a federally enforceable “Duty of Care” requiring covered platforms to mitigate defined harms to minors.
Enforcement authority would rest with the Federal Trade Commission, which could investigate and sue companies that fail to prevent minors from encountering content deemed to cause “harm to minors.”
That framework would require regulators to evaluate internal content moderation systems, recommendation algorithms, and safety controls.
S. 1748 also directs the Secretary of Commerce, the FTC, and the Federal Communications Commission to study “the most technologically feasible methods and options for developing systems to verify age at the device or operating system level.”
This language moves beyond platform-level age gates and toward infrastructure embedded directly into hardware or operating systems.
Age verification at that layer would not function without some form of credentialing. Device-level verification would likely depend on digital identity checks tied to government-issued identification, third-party age verification vendors, or persistent account authentication systems.
That means users could be required to submit identifying information before accessing broad categories of lawful online speech. Anonymous browsing depends on the ability to access content without linking identity credentials to activity.
A device-level age verification architecture would establish identity checkpoints upstream of content access, creating records that age was verified and potentially associating that verification with a persistent device or account.
Even if content is not stored, the existence of a verified identity token tied to access creates a paper trail.
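To make that concrete, here is a speculative sketch (every name below is hypothetical; S. 1748 only directs agencies to study such systems): even a minimal token binds a persistent device identifier to a verification event, and the issuer has to record that the event happened, which is the paper trail.

    # Speculative sketch: the trail exists even if no content is ever logged.
    import hashlib, json, time

    def issue_age_token(device_id, verified_over_18):
        token = {
            "device": hashlib.sha256(device_id.encode()).hexdigest(),  # persistent
            "over_18": verified_over_18,
            "issued_at": int(time.time()),
        }
        # The issuer keeps a record like this to prove verification occurred;
        # reused with each request, it is what links identity to access.
        return token

    print(json.dumps(issue_age_token("device-serial-1234", True)))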
Constitutional questions follow. The Supreme Court has repeatedly recognized anonymous speech as protected under the First Amendment. Mandating identity verification before accessing lawful speech raises prior restraint and overbreadth concerns, particularly where the definition of “harm to minors” extends into categories that are legal for adults.
Courts have struck down earlier efforts to impose age verification requirements for online content on First Amendment grounds, citing the chilling effect on lawful expression and adult access.
Despite this history, state officials continue to advocate for broader age verification regimes. Several states have enacted or proposed laws requiring age checks for social media or adult content sites, often triggering litigation over compelled identification and privacy burdens.
The coalition’s letter suggests that state attorneys general are not retreating from that position and are instead seeking federal backing.
The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors.
Multiple state offices have already filed lawsuits or opened investigations against Meta and TikTok, alleging “harm” to young users.
At the same time, the coalition objects to provisions in H.R. 6484 that would limit state authority. The House bill contains broader federal preemption language, which could restrict states from enforcing parallel or more stringent requirements. The attorneys general warn that this would curb their ability to pursue emerging online harms under state law. They also fault the House proposal for relying on company-maintained “reasonable policies, practices, and procedures” rather than imposing a statutory Duty of Care.
The Senate approach couples enforceable federal standards with preserved state enforcement power.
The coalition calls on the United States House of Representatives to align with the Senate framework, expand the list of enumerated harms to include suicide, eating disorders, compulsive use, mental health harms, and financial harms, and ensure that states retain authority to act alongside federal regulators. The measure has bipartisan sponsorship in the United States Senate.
The policy direction is clear. Federal agencies would study device-level age verification systems, the FTC would police compliance with harm mitigation duties, and states would continue to pursue parallel litigation. Those mechanisms would reshape how platforms design their systems and how users access speech.
Whether framed as child protection or platform accountability, the architecture contemplated by S. 1748 would move identity verification closer to the heart of internet access.
Once age checks are embedded at the operating system level, the boundary between verifying age and verifying identity becomes difficult to maintain.
The internet would be changed forever.