> The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
and
> “The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
This is what it's about right? The article doesn't make it seem like encryption is meaningfully part of this case at all.
> Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
There's no indication that that decision, or the announcement, is directly related to the trial; maybe they just happened at the same time? It's a link drawn by CNN without presenting any clear connection.
But who gets the $375 million? Anyone know the cut the law firm will take from this incredible amount of money?
Are the kids alright?
> The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
The New York case has explicitly gone after their support of end-to-end encryption as a target: https://www.reuters.com/legal/government/meta-executive-warn...
Their stated reason? Child safety.
Their actual reason? You can figure that out.
Absolutely. Particularly where they've been found guilty.
> but we should be aware that these cases are one of the key reasons why companies are backtracking from features like end-to-end encryption
Why _social media_ companies are backtracking. I'm extremely nonplussed by this outcome.
> concerns that allowing teens
Yes, because that's what we all had in mind when considering the victims and perpetrators of these crimes.
Harm to kids is actually happening, and this is always going to be a hot button topic.
E2E is critical for our current ability to communicate online, but will be a lower priority when pitted against child safety.
Fighting the good fight is one thing, fighting for the sake of it, without a plan that addresses the tactical reality is another altogether.
Personally, I think E2E will be defended, but it’s becoming a lightning rod for attention. As if removing encryption will solve the emerging issues.
I suspect providing alternatives to champion, such as privacy preserving ways to verify age, will force a conversation on why E2E needs to go.
* Classifying accounts as child accounts (moderated by a parent)
* Allowing account moderators to review content in the account that is moderated (including assigning other moderation tools of choice)
In all cases, transparency and enabling consumer choice should be the core focus.
Additionally: by default, treat everyone online as an adult. Parents who let their kids online like that, without supervision or some setting marking that the user agent is operated by a child, are effectively choosing to let their children interact with strangers. This tends to work out better in more controlled and limited circumstances, where the adults involved have the resources to provide suitable supervision.
At the same time, any requirements should apply only to commercial products. Community (gratis / not for profit) efforts presumably reflect the needs of a given community.
It is better for them to be forced to turn off the security theater so people that need actual privacy can research alternatives.
We know that this isn't really going to reduce harm for children, we know Meta is not seriously going to suffer or change, and we know this is going to be used as a cudgel to beat down privacy and increase surveillance.
We don't need all this privacy invasion if we just didn't give kids a smartphone with a data plan.
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13: https://archive.ph/y3pQO
Once you get the classification correct (and AI cannot do this; only community ombudsmen/age verifiers can, in a privacy-first way*), the app stores can easily tell app devs which accounts are sensitive, and filtering should be much more effective.
*Basically, once your age is verified by a real human for your device (using device-local encryption to verify biometrics), you are set. No kid should be able to bypass this and install apps on devices that their parents hand to them. There will always be black-market devices with these apps, but existing tech offers ways of keeping those to a very small minimum.
It's ok to drive Dad's truck unless he catches you and tells you no.
Meta has always wanted the appearance of caring about safety (it helps them attract talent and keep mission-related morale high) while nearly always prioritizing growth (save for tiny blips of time, like in 2017 when the fallout from the Cambridge Analytica stuff was hitting a crescendo), whereas companies like X are run by people explicitly uninterested in putting significant resources into safety, especially research.
I will also add that, for the past few years, Meta and X both have become extremely hostile to external researchers of their platforms, shutting down access to tools and data.
I don't like Meta in any sense of the word, and I think they've significantly degraded humanity and society as a whole for generations to come. But I hope my conspiratorial mind is just overreacting.
Who verifies that the person verifying the child's age is actually authorised to do that? Who verifies that verification? And so on up. This needs a chain of trust that can only end up at government. And that chain of trust will then be open to being abused by shitty politicians.
What mechanism in (e.g) Linux is responsible for implementing this age verification so that it cannot be tampered with (or trivially overruled by a sudo call)? Which organisation is legally liable if that mechanism doesn't do its job? How can we stop someone from overwriting that mechanism with their own, in an open OS that is deliberately designed to allow anyone with root to change anything on it?
What you propose here is the death of open computing. And I personally believe that we would be much better off as a species if we kept open computing and just taught our kids how to handle social media better.
Firms have a fiduciary duty to shareholders and profit.
On the other hand, You ultimately decide the rules and goals that operate government organizations, and do not have a profit maximization target.
They aren’t the same tool, and they work for different situations.
The E2EE slippery slope is a different challenge, and for that I have no thoughts
Got away with it again, good profit, will repeat.
I'm hardly the first person to use this logic, but if they make more money breaking the law than they have to pay in fines, then it's not a fine, it's a business expense.
If all 50 states sue at the same rate, that'll be a 30% dent, and I'm sure states can sue for more than 0.6% too. That would be historic action against malfeasance and would send a strong FAFO signal to all corporates.
Let's lobby for it.
Reality, folks: you can't have both.
It helps to reduce the hegemony of large social platforms and promotes privately owned websites. For example, I know everyone who has permission to post on my website (or I pre-moderate strangers' comments), and I'm ready to take responsibility for the posts my website publishes.
Currently the legal stance seems strange to me -- large media platforms are allowed to store, distribute, rank and sell strangers data, while at the same time they claim they are not responsible for it.
Now I'm afraid they've screwed everyone over and the idea of an anonymous open internet is now dead- we're gonna see age (read, real ID) verification gating on every site and app soon....
The dumb thing is to look back and see how unimportant it was for the Facebook feed algorithm to be this addictive. They already had the network effects and no real competitors. They could have just left it alone.
These platforms expose minors to predators and bad actors, and Meta was proven lying about safety.
You can't realistically make a space that's free from predators. The real answer is teaching children to recognize unacceptable behavior. But most abuse is from inside--typically adults that the parents put in a position of trust or quasi-trust.
I do not fault Meta for there being predators, I fault Meta for pretending they're being kept out.
New Mexico is 0.6% of the U.S. population [1].
[1] https://en.wikipedia.org/wiki/New_Mexico (2.13 million)
[2] https://www.census.gov/popclock/ (342 million)
Why else would they want to sneakily add facial recognition to smart glasses?! /s https://www.businessinsider.com/meta-ray-ban-smart-glasses-f...
If you know what the platform is capable of, if you've seen how the sausage is made, you're probably not using it.
People are also a little naive in not seeing that these platforms aren't just bad for children; they are bad for adults as well. I'm not opposed to not "selling" them to children, but we also need to label them correctly for adults and have rules like those for alcohol, tobacco, and gambling, so no or limited advertising. Scrub the public spaces of Facebook logos.
They don't care about child safety as long as it doesn't become so bad as to impact their revenue negatively. But they see that governments all over the world push for some kinds of age restrictions, and they know they are a prime target and it is hard for them to push back against that.
The reason they are (not so secretly) lobbying for requiring us to ID ourselves at the device level is that they don't want to be the gatekeepers. They want to make creating an account as effortless as possible, and having to prove your age is a barrier that may turn off some people, including adults, who may instead turn to services that don't require age verification. By moving the age verification into the OS, not only does the responsibility shift to the OS or hardware vendor, but it also removes the disadvantage they have against services that don't require age verification.
For a similar issue, PornHub is currently blocked in France, because they don't want to comply with the law related to age verification. Here is their argument: https://www.aylo.com/newsroom/aylo-suspends-access-to-pornhu...
If you read between the lines, you will see that they have the same stance: "put age verification at the OS level, so that people don't discriminate against us". They know they are not in a position to argue against "child safety" laws, so instead, they lobby for making it worse for everyone instead of just themselves.
[1]: I could be wrong thinking those are benign.
The other one was the time I was speaking to my brother-in-law, who had just paved his driveway. He said, "I could have used airport-grade tar, but thought it was too much," while we were in front of his Nest security cam (the only thing I can think of). The very next morning, I'm scrolling through Facebook, and sure enough, someone local is advertising airport-grade tar. Why? I didn't google this; I only heard it from them.
There's some serious shenanigans going on with ad companies, and we just seem to handwave it around.
Coincidentally, I remember both experiences very very vividly, because this was the last time I used either platform in any meaningful capacity.
This is unfalsifiable. Just say what you think it is explicitly.
The “think of the children” angle is the perfect angle to pressure companies to make communications readable by the government. And here tech audiences are welcoming it and applauding because they couldn’t read past the headline and they think anything that hurts Zuck is good.
How anyone can see this happening and not draw the connections to Discord and other services also pushing ID checks is beyond me. Believing that this will only apply to services that don't affect you is short-sighted.
Zuckerberg has a brain, he decided to take this action, it is absurd he is not being hit with a personal penalty.
Trying to approach it from the direction of websites determining if you are an adult is a privacy nightmare and provides a huge attack surface. (Which is what the government wants--the ability to monitor.) Flipping it over is much, much safer--but fails the real mission of exposing dissent.
(On-device security, the credential of the adult is loaded onto the device but not transmitted anywhere, it can only be obtained locally. The device simply responds as to whether it has a credential loaded. Bad guys are unlikely to want to sell such devices as the phone could be traced back to them.)
And the parents can select a strict child lock, or permit access but have copies forwarded to the parent.
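The scheme described above (the credential lives only on the device, and an app may only ask a bare yes/no question) can be sketched as follows. This is my own minimal illustration, not any real attestation API; the class and method names are hypothetical:

```python
# Hypothetical sketch of a device-local age credential.
# The credential never leaves the device; apps can only ask a yes/no question.

class Device:
    def __init__(self):
        self._adult_credential = None  # stored locally, never transmitted

    def load_credential(self, credential: str) -> None:
        """Provisioned once, locally (e.g. in person at purchase), not over the network."""
        self._adult_credential = credential

    def is_adult_present(self) -> bool:
        """The only query an app may make: a single bit, carrying no identity."""
        return self._adult_credential is not None


device = Device()
print(device.is_adult_present())  # False: no credential loaded yet
device.load_credential("issued-in-person")
print(device.is_adult_present())  # True: apps learn only this one bit
```

The point of the design is the narrow interface: because the app only ever sees a boolean, neither the service nor the government learns who the adult is.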
Dad should either know his children would never drive the truck without permission, or keep his keys as safe as his wallet (and if he can't trust his kids with keys, you bet his wallet needs protection).
We are at a point where we are picking and choosing collateral damage targets.
Let's admit it: in the same vein that Trump is a symptom of current US society, the approach and effects of the social networks we allow them to be are a result of how lazy, and thus addicted, people got. On top of that, many of the parents are doing exactly the same, so don't expect miracles.
One thing that I don't understand - even here, some folks call that sociopathic amoral piece of shit 'zuck' and treat his empire like some sort of semi-charity. When I attacked facebook company in the past, there was always a lot of defense (look at this open sourced stuff, look at that... which I presume came from either direct employees or clueless stock holders). People are people, deeply flawed and often weak without willingness to admit it to themselves.
Unfortunately, social media users don't have billions of dollars to spend on lobbying and related activities around the world.
Also, “the total civil penalty of $375m was reached after the jury decided there were thousands of violations of the act, each with a maximum penalty of $5,000. Meta is also involved in a separate trial in Los Angeles, in which a young woman claims that she became addicted to platforms like Instagram and YouTube, owned by Google, as a child because of how they are intentionally designed.
There are thousands of similar lawsuits winding their way through the US courts.”
Especially since, when you look at the behavior of younger people, they're way more careful about social media than millennials were. My teenage child and their friends keep all of their conversations in a massive but private group chat. Any social media consumed by them is basically "read-only". They don't post online; none of them have social media accounts where they post pictures of themselves, etc.
Same with all of my younger Gen Z coworkers. If they have socials, they post very selectively, and all content is work-friendly.
The people I see who need "protection" are aging millennials who don't really understand how wildly they're exposing themselves and their families. I cringe when I see the amount of personal photos and information shared by the few millennials I know who still need their ego boost from these platforms (and that number itself is much smaller).
Younger people don't share their opinion and anything resembling private photos online any more.
The legal system does not seek to destroy the business, or individual criminal. Instead it wants them to be able to continue doing their other non-criminal stuff.
If you don't support this you're obviously a pedo nazi terrorist.
As more and more people essentially lock themselves in with these identity brokers, though, I imagine it has a very stifling effect on speech. Imagine getting banned from those.
There are people who are against age verification just on principle and others who are against it because they know any realistic implementation is going to be abused.
We can assume Meta has backdoored its E2EE somehow anyway.
You start slow, then push it to the limits.
Netflix: never ads, then some ads, then eventually it's just Adflix, after 20 years.
Each new manager wants that comp up, so ads go up by 5% every year.
You can purchase a scam ad and it'll be up in 10 minutes. Lie to every anxious child that they have ADHD and need meth; lie to every dejected boy that they just need to manosphere up and buy supplements.
They think the public is stupid. They might be right.
Meta's biggest competitor was users' personal lives, not any other web service. They have been ruthless in crushing that competition.
They immunised us.
> The state will ask Biedscheid to direct Meta to make changes to its platforms, including adding effective age verification
> Surveys by Britain’s tech regulator, Ofcom, find that among children aged 10-12, over half use Snapchat, more than 60% TikTok and more than 70% WhatsApp. All three apps have a notional minimum age of 13.
Given how current parental controls work, kids are not getting access if their device is under parental control (the default for open web access is off). So Facebook still won't see any child-locked devices, even before this ruling. My guess is that this ruling applies to parents who aren't making sure their kids get access only via child locked devices.
Cancer is a great metaphor because it's a perversion of natural, healthy processes. So-called social media is nearly that, but actually grotesquely unhealthy.
People are dramatically unwell when they are not social, but that unregulated process is also negative up to and including being lethal.
Option A: The Nest camera not only listened to the conversation and picked out "Airport Grade Tar" and decided it needed to show adverts about it to people, but the camera also identified you to the point it could isolate your FB account in order to serve you those adverts.
(I'm making some assumptions but...)
Option B: Your brother had done various searches for airport grade tar from his home (in order to know how expensive it was). You, whilst visiting his home, were on his Wifi and therefore shared the same external IP address, your phone did enough activity whilst at his house (FB app checked in to their servers in the background, or used Messenger, etc) to get the "thinking of buying airport grade tar" associated with his external IP address associated with your FB account that was temporarily on that IP.
I had a friend who was convinced that some device in his house was listening in on his conversations with his wife as he kept on getting adverts for things they'd been talking about buying the day before but he hadn't searched for. (But she was searching for it from their home wifi, which is why it appeared in his adverts afterwards.)
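The Option B mechanism, an interest observed from one account on a household IP being inherited by other accounts later seen on the same IP, can be shown with a toy model. This is purely my own illustration of the inference pattern described above, not Meta's actual system:

```python
from collections import defaultdict

# Toy model of shared-IP interest propagation (illustration only).
ip_interests = defaultdict(set)    # external IP -> interests observed there
user_interests = defaultdict(set)  # account -> inferred ad interests

def observe(ip, user, searched=None):
    """Record activity from `user` on `ip`; optionally a search they performed."""
    if searched:
        ip_interests[ip].add(searched)
    # any account active on this IP inherits the IP's accumulated interests
    user_interests[user] |= ip_interests[ip]

# Brother-in-law researches prices from his home connection:
observe("203.0.113.7", "brother_in_law", searched="airport grade tar")
# A visitor's phone later checks in on the same wifi (same external IP):
observe("203.0.113.7", "visitor")
print(user_interests["visitor"])  # {'airport grade tar'}
```

No microphone is needed: the household IP acts as the join key between the searcher and everyone else who was on that network.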
Currently, websites and apps are supposed to ensure they don't have kids under 13, or if they do, that they have the parents' permission. That's federal law in the US.
These laws make the operating system or app store (depends on the particular law) responsible for being the age gate.
This doesn't stop the federal law from being enforced or anything, but the idea is apps/websites don't handle it directly, that's handled by the operating system or app store.
So now - companies like Meta can throw up their hands and say "hey, the operating system told us they were of age, not our fault." It also makes some things murkier. Now if Meta gets sued, can they bring Google/Apple/Microsoft in as some kind of co-defendant?
I think that murkiness is the point. They don't need to create the most bullet-proof set of regulations that 100% absolves them of all responsibility, they just need to create enough to save some money next time they get sued.
I can think of a ton of regulations we could create to better help protect kids. We could mandate that mobile phones, upon first setup, tell the user about parental controls that are available on the device and ask if they'd like to be enabled. Establish a baseline set of parental controls that need to be implemented and available by phone manufacturers, like an approval process that you need to go through to hit store shelves.
We could create educational programs. Remember being in school and having anti-drug shit come through the school? It could be like that but about social media (and also not like that because it wouldn't just be "social media is bad," hopefully).
Again all these laws do is take what should be Meta's burden, and make it everybody else's burden.
In the UK, you cannot use App Store and iPhone (your own phone) without verifying your identity:
And it makes more sense: Apple and Google have your credit card, or if you are a parent who bought a phone for your child, then at first boot-up it should be your job as the parent to set up a child account.
If so, it is customarily permissible to use rhetoric and sarcasm to more strongly emphasize a point. Or, to leave the conclusion as an exercise for the reader.
https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
The laws being passed target exactly the wrong thing, something that wasn't the problem. They should have been passing "duty of care" laws aimed at social media companies, not "give me your age" laws.
I may have missed it, but almost all these laws being passed for this issue have been pretty much solely around data collection rather than modifying the behavior of the worst businesses in the game.
It would be like seeing a car wreck kill a bunch of pedestrians and then passing a law that pedestrians need to carry IDs on them.
By "erasure," I'm not referring to the death of the involved; I'm referring to the elimination of the individual's social capital.
When the privileged lose their ability to influence others, they tend to get rather distressed.
This is really bad for Meta.
These lawsuits and regulations are against the industry, not the users.
The regulations and lawsuits are driving the pressure to ID check users and remove end-to-end encryption.
There absolutely are a lot of Gen Z who avoid social media, but to pretend most are privately hunkered away is completely ignorant of today's social media usage.
I personally stopped using Facebook because it was annoying me with useless doom and aggressive comments from people on stupid topics. If it had shown me only cat pictures (like Instagram does) or reasonable stuff (news, etc.), I would have continued using it.
I call it _anti_social media.
This also isn't helpful, but the sudden sense of urgency isn't helping either. The internet has existed without any kind of age verification or safety measures for about 30 years. We could have used that time to have a sensible conversation about policy trade-offs, but instead we've waited till now to decide that everything has to be rushed through with minimal consideration.
I unlurked and made a thread last night, but I think it might be hidden due to account age: https://news.ycombinator.com/item?id=47511919
Meta knowingly hurt children for profit. It worked.
If we are in any way serious about technocratic solutions to social problems, this would be untenable: the company would be bankrupted, and a new company would fill its place. No tears would be cried, nothing of value would be lost, and half of Hacker News would be champing at the bit to build a better alternative for the newly opened market.
But that's not what happened. We allowed children to be knowingly hurt for profit.
The system is functioning as intended.
That ship has sailed
Now the easiest law change - one that wouldn't require anyone to change anything - would be to revoke Section 230. This would make service providers liable. Everything else is a band-aid. I doubt that this verdict will survive appeal (due to Section 230). But if it does, then again there is no need for any new regulations. The tort lawyers will solve the problem for us.
If we do have device age verification, it still doesn't shield Meta. The lawyers will sue everyone involved, and disclosure will show whether Meta had data that would have shown the user should have been blocked.
The purpose of age verification is to avoid all this. Of course the current proposals suck and won't achieve this. The market will not accept an approach that would work - which would be for anything with a screen or speaker to be permanently tied to an individual user. "OS verification" cannot succeed - it must be one-time hardware attestation. Even a factory reset wouldn't remove the user assignment.
Where does this "perfect surveillance" idea come from? I teach my children how to make acquaintances: first in a more direct, more supervised way, later letting them be more and more self-driving, like anything else in parenting, e.g. riding a bicycle. But I guess urbanization diminished that skill as well. No need for "perfect surveillance"; no parent wants it. It's not only easier to pass on basic principles, it also makes supervision gradually less necessary over time.
> parents have in fact full control of snail mail
What? Children using e-messaging could just as well do snail mail completely on their own (of course they don't, but it's not about going back to the analogue world; it's about forming the digital world on the same principles). Well, I can imagine that in a highly urbanized environment, where children are forbidden to go outside and are locked down with family, making them even more isolated, and are entrusted "to the phone" to cope with the daily frustration, phone usage and e-messaging may easily end up completely unattended and undisclosed to parents, while posting an envelope is a level of expertise beyond them. Parents' ability to control e-messaging is as much as their ability to control snail mail.
Now we're just moving on to a kind of moral panic think-of-the-kids kind of moment that is thinly-veiled state surveillance.
Where are you seeing that?
The article says:
> Jurors found there were thousands of violations, each counting separately toward a penalty of $375 million. That’s less than one-fifth of what prosecutors were seeking.
> Meta is valued at about $1.5 trillion and the company’s stock was up 5% in early after-hours trading following the verdict, a signal that shareholders were shrugging off the news.
> Juror Linda Payton, 38, said the jury reached a compromise on the estimated number of teenagers affected by Meta’s platforms, while opting for the maximum penalty per violation. With a maximum $5,000 penalty for each violation, she said she thought each child was worth the maximum amount.
How would you actually know this? Facebook is a surveillance company, but they are not omniscient.
I think the framework here is to have community-driven age verifiers (I recall there is an EU effort for digital wallets which, besides its bad parts, has some of these good parts) that can verify ages for people and link them to (locally, biometrically encrypted) devices for pinning. This would be privacy-preserving. The only downside is a mandate that all devices have built-in hardware biometric encryption, like a finger/face print, so phones can't just be used with these apps installed.
The verification part is a job that could be done by all the teachers and coaches, and of course parents. Anyone verifying identities would be cryptographically nominated/revoked by a number of more senior members of the community. A parent always gets the right to say OK for their kid, of course, but so could teachers or legal guardians.
We (legally) need a mandate for smart devices to have local, device-only biometric verification. The law should be for these apps to follow device app store protocols.
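The nomination/revocation idea above (a verifier is trusted only while enough senior community members vouch for them) can be sketched without any cryptography. This is a toy quorum model of my own, not the EU wallet design; the quorum size and names are made up:

```python
# Toy model of community-nominated age verifiers (illustration only).
QUORUM = 2  # hypothetical: nominations needed before a verifier is trusted

class Community:
    def __init__(self, seniors):
        self.seniors = set(seniors)
        self.nominations = {}  # verifier -> set of seniors vouching for them

    def nominate(self, senior, verifier):
        if senior in self.seniors:  # only senior members may vouch
            self.nominations.setdefault(verifier, set()).add(senior)

    def revoke(self, senior, verifier):
        self.nominations.get(verifier, set()).discard(senior)

    def is_trusted(self, verifier):
        return len(self.nominations.get(verifier, set())) >= QUORUM


c = Community(["teacher_a", "coach_b", "parent_c"])
c.nominate("teacher_a", "verifier_x")
print(c.is_trusted("verifier_x"))  # False: one vouch is below quorum
c.nominate("coach_b", "verifier_x")
print(c.is_trusted("verifier_x"))  # True: quorum reached
```

In a real system the vouching would be cryptographic signatures rather than set membership, but the trust structure (quorum to grant, any revocation dropping you below it) is the same.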
Stop trying to gaslight people and think about what you are defending and making excuses for, instead of basically being a conspirator facilitating these vile acts through excusing effectively no consequences. If your daughter was sexually exploited, do you think $5000 would be adequate compensation? Possibly even without covering therapy?
I am not sure about the particulars of this case and I think parents are also largely responsible just like any other criminal negligence case, but that is no excuse to simply let corporations who after all we are told are people, be some kind of superior, special people who are not punished to any even moderately consequential degree as actual, real people. Are they people or not? So they get to commit crimes but also not have actual, real consequences? Just stop and think about what a bunch of nonsense you are promoting.
We actually need a punitive system similar to the individual punishments. That would maybe look like a seizure of a percentage of the company similar to the percentage of one’s life one would spend in prison for a similar act. Yes, it would be a lot if it were, e.g., a 1/3 of the ownership of Facebook (which is easily done by forced issuance of shares), but that would also be the incentive to make sure that you, Facebook, are not facilitating child sexual exploitation.
The current problem with all of our systems is that there are only perverse consequences where the perpetrators of evil benefit and profit from the evil, while everyone else pays the cost. That needs to be flipped.
By coincidence, New Mexico represents 0.6% of America's population.
The internet was not a calm and well behaved place before Facebook arrived. The original “Eternal September” was in the early 90s. Usenet, forums, Reddit, comment sections, and every other social part of the internet have been full of bad behavior long before Facebook came along.
That is, BiL was marked as 'spreader for airport grade tar' based on recent activity, marked as having been in contact with spreadee, and then spreadee was marked as having received the spreading. P(conversion) high, so the ad is shown.
It's just contact tracing, it works well and is really easy even without literally watching what goes on in interactions.
They don't have mine.
Even if they did, having a credit card is not proof of age.
> if you are a parent that bought soem phone for you child then at first boot up as a parent should be your job to setup a child account
Setting up a "child account" shouldn't involve setting some age field. Setting up a "child account" should involve restricting permissions.
Why leave it to the OS or a company to decide what is "age appropriate"? Leave it to the parent to decide what the child should or should not have access to. Extra bonus: that same "child account" can then also be used for other restricted purposes. Want a guest account which limits activity? Want an incognito account? Want a sandbox account? None of these should require setting some age.
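The permission-first idea above (a "child account" is just a restricted permission set, with no age field anywhere) is easy to make concrete. A minimal sketch, with permission names invented for illustration:

```python
# Sketch of age-free restricted accounts: the parent picks capabilities,
# and nothing in the account records or infers an age.

ALL_PERMISSIONS = {"browse", "install_apps", "in_app_purchases", "messaging"}

def make_restricted_account(allowed):
    """Return an account limited to the given capabilities (no age field)."""
    return {"permissions": set(allowed) & ALL_PERMISSIONS}

child = make_restricted_account({"browse", "messaging"})
guest = make_restricted_account({"browse"})
print("install_apps" in child["permissions"])  # False
print("browse" in guest["permissions"])        # True
```

The same mechanism then covers the guest, incognito, and sandbox cases mentioned above, since each is just a different permission set.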
Something I would be 100% OK with is some regulation that at first boot, you have to present information about what parental controls are available on the device and ask if you'd like them enabled.
I haven't set up a phone in a hot minute, I only do it once every few years, is this something they already do?
I'd imagine there's a lot of cases where a parent buys a new phone and hands down the old one to their kid without enabling safety features. I don't know if there's a good way to help with that - maybe something like, whenever you go to set a new password, prompt "hey is this for a kid?" and go through the safety features again?
Just spitballing, that last one may not be a good idea, not really sure.
There are many interesting ways that the conversation could have been carried forward, but there is no way to continue the conversation when the OP doesn't make it clear what they think.
The only thing I can say is: No I cannot figure it out, please tell me what you're trying to say here.
Though I don't see a link to a specific case in either article, I don't think they're separate cases.
they did $200 billion in revenue and $60 billion in net income last year.
a $3 billion fine would be barely more than a slap on the wrist.
I seem to recall someone taking pictures of their baby, naked, because it was sick, and emailing them to the doctor -- and having their Google account terminated. Terminated, with the father being labeled a pedophile and the police contacted (all automatically).
Everyone was quite upset. Everyone felt it was too intrusive.
Frankly, communication platforms have no business trying to police anything at all. I wouldn't want the phone company recording all my conversations, hunting for trigger words, and then contacting the police or cutting off my phone if I said a "bad word".
Yet somehow it's OK to have this level of intrusion because.. um "computers".
The state has no business listening in on private citizen's communication.
Corporations have no business doing so.
To protect the 12-year-old girl, something called "her parents" needs to pay attention and watch what she does. That's their job. They're her guardians.
Some random corporation has no business in that. Some random corporation has no business being an 'algorithmic parent', an automated machine with no appeal.
Here's something I'd support -- a way for parents to prevent children from registering for accounts, and, to be able to examine children's accounts.
But... then we get into ID verification. Of course, surely you support ID verification for platforms, because if you support platforms knowing the age of people (40 and 12, you listed), then you therefore must support a way to verify those ages.
Children who are smart enough to get access to a given vice without getting caught are more likely to be mature enough to be able to cope with that vice.
It really comes down to this: either they can't track you, or they will track you.
The ideal scenario would be everyone choosing not to engage with these predatory platforms. Going from there, the right question to me is what steps we have to take as a society for that to become even remotely realistic and, subsequently, what role governments can or should have in that.
For starters, I would be in favor of fines that actually hurt the bottom line instead of this "cost of business" bullshit. We have handed these corporations unprecedented access to and control over our lives, to the point that they erode democracy and the social fabric itself. The inevitable abuse of that power when it comes with barely any strings attached needs to be punished in a way that makes it unattractive as a business model at the very least.
Instead of lowering the attack surface by locking out kids, and in turn introducing mass surveillance which at best also lends itself to abuse, the root issues of ruinous greed and lack of accountability need to be addressed. The whole concept that there is no price too high for profits needs to burn. Social media is just one of the more recent manifestations of it.
There is always a conversation, but it is often not the popular one, and it gets drowned out by whatever everyone is excited about at the moment. You can find it if you seek it out.
Lawrence Lessig’s book “Code” (1999), for example, argues that a completely unregulated internet is an anomaly, that regulation will certainly be necessary, and that it should be done in a thoughtful manner.
Second best time to plant a tree: now.
“The jury found that Meta was responsible for violating New Mexico's Unfair Practices Act because it misled the public about the safety of its platforms for young users.”
So the penalty is for misleading around CSAM. Not CSAM per se. (My understanding is the latter are still being adjudicated.)
It is actually terrifying. If you write something out of context, or upload an image out of context, you can be in big trouble.
All imperfect solutions, but they slice the original huge problem into much smaller chunks which are easier to tackle with the next approach.
Instead we are saying "only adults should use this" which, while technically regulating the industry, places the restriction on users.
We're treating it like tobacco or alcohol (2 industries who have similarly spent millions upon millions of dollars in lobbying efforts) but we should be treating it like asbestos.
That's the flow that California's age verification system uses. Personally, I'm opposed to any age verification beyond the current "pinky promise you're 18" type deals, but California's is the least intrinsically offensive to me.
On HN itself, no way. Too many people here make far too much money on ads to want that. And the part that wants freedom seems to want so much freedom that it gives huge corporations the freedom to crush them.
>things than a digital age verification that doesn't track every time you use it.
The big companies that pay the politicians don't want that, therefore we won't get that.
EDIT: I see I'm mixing up the New Mexico case yesterday on sexploitation with the addiction case in Los Angeles I thought we were talking about here.
No it didn’t. That was just like the first free sample from the drug dealer. Give a “good” free service to rope them in, always with the next steps in mind.
Source: I was a bad, bad, boi, on UseNet.
That's the whole point: the word exists precisely as a testament to something that used to exist but now doesn't.
Anybody old enough to remember the word when it was common use should realize that it would have been impossible for the term to be coined in 2026.
If you missed that part of the Internet (maybe you were too young or maybe you were focused on other things, like the vast majority of people in the 90s), that's totally fine, but plenty of us did experience it and remember it pretty clearly.
> Usenet, forums, Reddit, comment sections, and every other social part of the internet have been full of bad behavior long before Facebook came along.
You can tell approximately how old someone is by whether they have reached the "everything sucks" part of life yet or not.
It gets continually worse. Agentic AI is another Eternal September. For example, we now have dimwits sending dozens of unsolicited and unreviewed slop PRs to open source projects. Every search result is an affiliate marketing listicle obviously written by a robot.
Seems such a simple solution, rather than each app and website having to figure out a way to do it.
I am not paid by a trillion-dollar company to decide whether it should be a birthday input or a dropdown where you select your political and religious convictions about what your child should see. Sony figured it out; if Apple pays me, I will spend more time writing them a UX flow so that average people could set the accounts up, and the rest could ask their priest, cousins, or another person who can follow instructions to set up the account for them.
The giants should have solved this decades ago instead of waiting for religious fanatics to push for it as law and get the governments involved; now you will get 25 different laws about this.
What prevents you from saying "Yes, and Xyz!!" and another poster "Yup, and Pdq, and Foo too!"
Or, maybe OP is just being a bit lazy, but again, it seems the context is conversation, not formal scientific inquiry where everything must be falsifiable?
https://www.nytimes.com/2026/03/25/technology/social-media-t...
on this post
NYT: “A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged Him as a Criminal.” [2022]
It seems a bit silly to think security abstinence is the solution.
LLMs are now heralding the Eternal September of even software engineering, and now I am wondering where to hang up my Techpriest robes in search of more elite pastures.
I wonder if this is how the clergy felt once the vulgar were allowed to study scripture not in the original spiritual programming languages of Hebrew or Latin, but English.
On the contrary, looks like you can:
> (…) sell the user's data (…) use this information to train AI models (…) use this information to serve Ads
No, they literally identified a plausibly sensible policy flag, not some arbitrary action.
These flags are used in literally every system imaginable.
That they don't conform to some hard criteria, whether yours or some working or ideological group's, is a bit beside the point.
Every system has these for good reason.
We have laws and regulations for all sorts of things to help people - including children and parents - in a complex society.
"The state has no business listening in on private citizen's communication."
They absolutely do, depending on circumstances. While Facebook is not a place for state monitoring, it's definitely in the public interest for them to flag something that is 'very bad' by some reasonable criteria, so that the state can then act if necessary. They do so within the boundaries of the law, subject to judicial oversight.
Facebook is a popular social network, a place where they want people to feel safe. It's a Starbucks lounge without the coffee, not a 'personal hyper-protected zone'.
Other places, such as Signal, Telegram etc. can have different levels of privacy aka e2e given the different offering and expectations of privacy.
Facebook more or less wants to offer a relatively safe place where the kids can hang out, where they know crazy people are not going to attack their kids. It's a community centre, not a hacker zone.
If we can get past that, then we can move onto basic issues of privacy, advertising etc. which are damaging to everyone, especially young people, for which Facebook has perverse incentives.
But just imagine that kids' accounts come with restrictions and privileges: when an account is marked as such, accounts marked as adult cannot initiate contact, the kids' data is automatically private, and those accounts cannot be commercialized in any shape or form.
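As a minimal sketch of that contact-initiation rule, assuming a simple "child"/"adult" marking and a guardian-maintained approved-contacts list (both hypothetical field names):

```python
# Sketch: accounts marked "child" are private by default, and accounts
# marked "adult" cannot open a conversation with them unless a guardian
# has approved the contact. Field names are illustrative.

def can_initiate(sender, recipient):
    if recipient["type"] == "child":
        # only contacts approved by a guardian get through
        return sender["id"] in recipient["approved_contacts"]
    return True

child = {"id": "kid1", "type": "child", "approved_contacts": {"mom"}}
adult = {"id": "stranger", "type": "adult", "approved_contacts": set()}
mom   = {"id": "mom", "type": "adult", "approved_contacts": set()}

print(can_initiate(adult, child))  # False
print(can_initiate(mom, child))    # True
print(can_initiate(child, adult))  # True
```

Note the asymmetry: the child can still reach out, but unknown adults cannot reach in.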
$3m is nothing. 10% of global revenues (not profits) for each year in which this occurred would be something that might actually make them think twice about breaking the law and harming people for money.
Kids with low parental supervision who steal uncle Roy's Marlboros are more likely to be able to cope with tobacco addiction?
Do you have any reasons to think this might be the case? Studies, research, a well thought-out article?
Ya know, this might explain why the warnings seem to fall on deaf ears here.
New favorite person on the internet.
I feel like Myspace/Friendster and early Facebook were nowhere near as harmful as where we are today (at least in terms of addiction; those sites were still vulnerable to grooming).
You meant the "vulgus". "Vulgar" has the same root, but a very different meaning.
This random thought is kinda disconnected from actual human history. "Not allowed to study Scripture" was not a thing: Illiteracy was. There were people that knew how to read and people who didn't, that's it.
I'm trying hard (and failing) to visualize your mental image.
"Dear Father: it looks like the Bible has been translated to English by my dear brothers up at the monastery. I'm sure you understand why I can no longer be a priest"
Remember that you're living in the actual earth timeline, not the 40k one.
Though I respect it as a human opinion.
Doing this doesn't accomplish anything in terms of protecting children from the harms of the internet. In fact it feeds your child's age to marketers and child predators.
Every website will get to decide how to handle the age data our devices will now be supplying them. In the case of facebook, it's not as if they had no idea the children endlessly posting selfies and posting "six seven" on their service weren't adults. Facebook was 100% aware that the children using their service were children. They knew what schools those kids went to, who their parents were, which other kids they hung out with. Facebook knew they were children and they took advantage of that fact.
The law California (and other states) passed doesn't define what content has to be blocked for which ages and doesn't give parents any ability to decide what content their children should or shouldn't be allowed to see. It takes control away from parents. As a parent, I might think that my 16 year old should be allowed to look up information on STDs but the websites that collect my child's age could decide they can't and I'll have no say in it.
But, specific to this article and ignoring my personal beliefs - I still find this judgement to be severely lacking. I don't think this judgement is nearly noticeable enough to Meta to actually provide a significant impact on the way they do business outside of tidying up some specifically egregious corners and making sure they internally communicate moving forward in a way that appears to comply with the judgement. The judgement was enough when applied to this pool of users to make these specific users unprofitable in retrospect (e.g. Meta would have more money if it had refused to even do business with these users) but I'm also concerned that the pool of considered victims was so narrow that it excluded a significant number of similarly harmed victims and that the amortized damages end up being negligible.
-emacs user
Capital and tech improvement will beat anyone chasing that.
FWIW, I like the analogy despite seeing a benefit to knowing the original languages to studying scripture.
I disagree. I'm of the Neopets/Pokemon forums generation. Elitism and selectivity were not what made that era a good balance between the caustic free-for-all we have now and the rich kid's playground from before. It was the technical and practical restrictions on what you could put in and get out of a web experience.
You couldn't upload thousands of thirst traps every month, because storage was limited. You couldn't summon another head of the dropshipping or affiliate marketing hydras with a few clicks, because the infrastructure didn't exist. You couldn't inundate users with dark patterns designed to extract every ounce of attention, data, and cash possible, because the rich web wasn't that rich yet.
You had to deal in text and reasonably-sized images on a CRT with a limited-bandwidth pipe feeding it all. Because of this, many of the techniques developed to transform so many other forms of media and so many other institutions into Capitalist hellscapes and high school, respectively, didn't work online. Until they did.
They are taking a position that cannot be argued against or even discussed because they don’t make that position clear.
They absolutely do, depending on circumstances.
So primary is this concept of privacy that it requires an entire legal framework: evidence of potential wrongdoing, proof that there is no other method to achieve the goal of establishing guilt, proof that the crime is severe and that this is not a fishing expedition, approval via a warrant after a judge has examined that evidence, and strict controls around the entire usage of that warrant.
Wikipedia says:
Lawful interception is officially strictly controlled in many countries to safeguard privacy; this is the case in all liberal democracies.
Using this edge case as "depending on circumstances" is clearly not the general case I was referencing. The statement that
"The state has no business listening in on private citizen's communication."
is valid, correct, and accurate. Listing edge cases does not invalidate the rule. It is the exception to the rule, and considering the sheer volume of communication compared to the volume actively tapped by legal means, it is the most edge case of edge cases.
There is no reason I would deem a mega-corp to somehow be OK to do what I would demand the state not. That our democratic societies have deemed that our states should not.
To highlight that, the phone companies of old would be in infinitely hot water, should they listen to communication between customers, in any fashion.
A platform is not a parent, should not police, should not act as an arm of the state, or as an arm of parents, except as I stipulated, by direct request of the parents, and only to enable the parents to be a guardian. Under no circumstances should that involve the platform scanning anything, instead, the platform could simply give parents direct access to a child's account.
Yet you did imply it, as I said, by mentioning the age of the persons involved.
There is no accurate way to know age, without some form of identity or age verification. Presuming a child will have an account marked "child" is folly, for kids can just sign up without a parent's knowledge, creating a second account. If the goal is to actually protect and be a pseudo parent for the child, then actually ensuring that a child cannot have an adult account is part of that.
My point is, TSA style "we're doing things which look secure, but are not helpful and only inconvenient" isn't going to help. It will only give the appearance, not the actualized result of security.
New York —
A jury on Tuesday found Meta violated New Mexico law in a case accusing it of failing to warn users about the dangers of its platforms and protect children from sexual predators.
The jury found Meta liable on all counts, including for willfully engaging in “unfair and deceptive” and “unconscionable” trade practices, and ordered the company to pay $375 million in damages.
Meta for years has faced concerns about risks to kids and teens on its platforms from parents, whistleblowers, advocates and lawmakers. Tuesday’s decision marks the first time the company has been held accountable in a jury trial for those issues.
A Meta spokesperson said the company “respectfully” disagrees and plans to appeal the decision.
New Mexico Attorney General Raúl Torrez sued Meta in 2023 for allegedly creating a “breeding ground” for child predators on Facebook and Instagram, claims that the company denies. The jury’s award was smaller than the billions in damages New Mexico had sought, but a later portion of the case to be presented directly to the judge could also force Meta to make changes to its platforms and pay additional penalties.
The case is part of a wave of legal pressure Meta and other social media platforms are facing over the safety of young users. As jurors in New Mexico state court delivered a verdict, jurors in Los Angeles are considering a separate case against Meta and YouTube accusing them of intentionally creating addictive features that harmed a young woman’s mental health. Social media giants are also facing hundreds of other cases from individuals, school districts and state attorneys general — some of which are set to go to trial later this year.
Closing arguments on Monday followed a six-week trial that included testimony from Meta executives and former employees-turned-whistleblowers. Details from the attorney general’s undercover investigation into child sexual exploitation on Meta’s platforms, which led to three arrests, were also discussed in the courtroom.
The New Mexico jury was tasked with deciding whether Meta willfully made false and misleading statements about the safety of its platforms or engaged in “unconscionable” practices by knowingly designing its platforms to harm young people.
“We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content,” the Meta spokesperson said in a statement Tuesday. “We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online.”
Torrez called the decision “a historic victory for every child and family who has paid the price for Meta’s choice to put profits over kids’ safety.”
“Meta executives knew their products harmed children, disregarded warnings from their own employees, and lied to the public about what they knew. Today the jury joined families, educators, and child safety experts in saying enough is enough,” Torrez said in a statement Tuesday.
Ahead of the decision, a Meta spokesperson on Monday pointed to an earlier statement saying that the New Mexico lawsuit “makes sensationalist, irrelevant and distracting arguments by cherry picking select documents” and disregarding the company’s “longstanding commitment to supporting young people.”
Meta attorney Kevin Huff argued in court that the company has been honest with users that some bad actors and inappropriate content can slip through its safety filters. But he said 40,000 people at Meta are responsible for making Facebook and Instagram safe, and that the company invests heavily in measures to protect young users.
The New Mexico attorney general’s office created multiple fake Facebook and Instagram profiles posing as children as part of its investigation into Meta. Those test accounts encountered sexually suggestive content and requests to share pornographic content, the suit alleges.
The fake child accounts were allegedly contacted and solicited for sex by the three New Mexico adult men who were arrested in May of 2024. Two of the three men were arrested at a motel, where they allegedly believed they would be meeting up with a 12-year-old girl, based on their conversations with the decoy accounts.
During the trial, the state argued Meta failed to do enough to prevent bad actors on its platforms from contacting kids.

Ex-Meta engineering director-turned-whistleblower Arturo Bejar testified about his efforts to warn Meta executives after he says his own 14-year-old daughter received sexual solicitations on Instagram. And he claimed that the highly personalized algorithms that make Meta’s platforms so successful at serving ads can also benefit predators.
“The product is very good at connecting people with interests, and if your interest is little girls, it will be really good at connecting you with little girls,” Bejar said.
Former Meta Vice President of Partnerships Brian Boland testified that he “absolutely did not believe that safety was a priority” to CEO Mark Zuckerberg and then-COO Sheryl Sandberg when he left the company in 2020. Instagram head Adam Mosseri, conversely, testified that Meta has rolled out safety features such as Teen Accounts despite their negative impact on growth and engagement.
The New Mexico case also raised concerns that allowing teens to use end-to-end encryption on Instagram chats — a privacy measure that blocks anyone other than sender and receiver from viewing a conversation — could make it harder for law enforcement to catch predators. Midway through trial, Meta said it would stop supporting end-to-end-encrypted messaging on Instagram later this year.
Regarding the encryption decision, a Meta spokesperson told CNN that, “very few people were opting in to end-to-end encrypted messaging in DMs, so we’re removing this option from Instagram in the coming months. Anyone who wants to keep messaging with end-to-end encryption can easily do that on WhatsApp.”
A Meta spokesperson previously told CNN that “child exploitation is a horrific crime and we’ve spent years building technology to combat it.” Meta’s Head of Child Safety Policy Ravi Sinha testified about the company’s work with law enforcement to prevent and report instances of child exploitation.
The company’s lawyers questioned the legitimacy of the New Mexico investigation, accusing the attorney general’s office of using hacked or stolen accounts and photos of real, non-consenting children to lure predators. Meta spokesperson Andy Stone called it “ethically compromised” in a series of posts on X last month.
Torrez previously called those criticisms a “distraction.”
“One of the most common things is to lash out and try and attack an investigation, rather than to really focus on their own accountability,” he told CNN Monday. “I don’t think it’s something that the jury is really going to fall for.”
This story has been updated with additional developments.
Forcing the users to verify their age changes nothing. It gives the illusion of "doing something" but it just gives facebook data they already had. What's still needed is regulating social media platforms themselves to place explicit limits on what they can do to hurt their users, including children.
Just because I don't know how to write a law that can prevent it doesn't mean that I can't recognize an actual issue when I see it.
No, but it's a framework that would allow other laws to do so. Because...
> it's not as if they had no idea the children endlessly posting selfies and posting "six seven" on their service weren't adults.
...you can make statements like that which sound like common sense, but it would be incredibly hard to regulate based on "if you know, you know" (or "you should have known"/"you had to have known"). The law has to provide (guarantee) a way for them to know in order to actually require them to take action based on it.
> As a parent, I might think that my 16 year old should be allowed to look up information on STDs but the websites that collect my child's age could decide they can't
This is a different problem. It sounds like you're essentially wanting to guarantee access to certain things, not just for your own 16-year-old, but for everyone else's, too (because if it was just yours, you could look it up for/with them if necessary). It'd be difficult to compel businesses to provide services to audiences they don't want to. But again, that's a separate problem that doesn't necessarily conflict with the rest of the system.
As I've aged, I've entered new-to-me territory where a good society needs to reflect the world as it is, so that its members have high survivability.
At the local family level, for instance: when my kids were young, I had dreams of being super financially successful so that I could give them lots of nice things. I just don't want that for them anymore. Protection, and pandering, do not make a good lineage IMO. It's something of a leap I'm asking of you, to connect this to my position here on Meta, but I've got other work to do, and I hope it's enough to convey my point.
So one of your suggestions of what the OP could mean was something you explicitly don’t think is true and would argue against? That sounds like a bad faith straw man set up.
Perhaps it’s just as well that the OP didn’t provide one specific reason to be nitpicked ad nauseam by an army of “well ackshually” missing the forest for the trees.
You could, as the HN guidelines suggest, argue in good faith and steel man. The distinction between “selling your data” and “profiting from your data” isn’t important for a high level discussion.
Can you truly not see through Meta’s intentions? There are entire published books, investigations, and whistleblowers to reference. Zuckerberg called people “dumb fucks” for trusting him with their data and has time and again proven to be a hypocrite who doesn’t care about anyone but himself.
Will literally never happen. It's impossible. I'm not talking figuratively impossible. At his level of wealth and influence, there are good odds he could murder someone on live stream and walk away. You are dangerously underestimating the influence the rich have in every aspect of society and law.
No it doesn't.
Life is no Reddit, lawyers and technicalities.
It's made up of regular people in communities.
If you see some guy creeping on 10-year-olds, you can notify the police, and Facebook will do that as well, for the same reason.
It may not at all need to involve 'state surveillance', and Meta can probably hand over whatever they want to the police in that circumstance.
The police can make a decision as to how to proceed.
A bit like if someone was harassing someone on the street.
Or if an unknown person starts hanging out outside by a schoolyard in a way that seems inappropriate.
We don't want to transgress people's rights but we also are going to look at 'negative signals'.
It's all Trump style "believe me I know how to fix it" and you will vote for the person that pushes your buttons regardless of whether they have a plausible solution or not.
I worry that it's the start of a lot of "other laws" which will limit the ability of children and adults to maintain even pseudo-anonymity online.
> The law has to provide (guarantee) a way for them to know in order to actually require them to take action based on it.
That sounds like an argument for even stronger proof of age than what the law calls for. Online platforms should do what nearly every other publisher does and provide a rating for their content. Netflix doesn't need to know how old I am. They provide a "kids" profile populated with their own curated content if that's the kind of thing I want and for everything else they provide ratings (PG, R, TV-14, etc.) It would be easy enough to push a rating to clients, they could even use HTTP headers for it. If lawmakers really felt the need to interfere in all of our operating systems it could require some means to collect and act on those ratings.
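A rough sketch of what that header idea could look like, with the client doing the filtering. The `Content-Rating` header name and the rating ladder are hypothetical, invented here for illustration:

```python
# Sketch: the site advertises a rating in a response header and the
# client (browser/OS) enforces a locally configured ceiling.
# "Content-Rating" is a hypothetical header name, not a real standard.

RATING_ORDER = ["E", "PG", "TV-14", "R", "NR"]  # illustrative ladder, mildest first

def client_allows(response_headers, max_rating="TV-14"):
    """Client-side check: hide content rated above the local ceiling.
    Unrated ("NR") content is treated as most restrictive."""
    rating = response_headers.get("Content-Rating", "NR")
    if rating not in RATING_ORDER:
        rating = "NR"  # unknown ratings treated as unrated
    return RATING_ORDER.index(rating) <= RATING_ORDER.index(max_rating)

print(client_allows({"Content-Rating": "PG"}))  # True
print(client_allows({"Content-Rating": "R"}))   # False
print(client_allows({}))                        # False (unrated)
```

The site never learns the user's age under this scheme; it only publishes a self-assessed rating, and enforcement stays on the device where the parent configured it.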
> It'd be difficult to compel businesses to provide services to audiences they don't want to.
This is the norm. It's what every business does apart from those who demand ID for every transaction. It's useful for businesses to give people their opinion or intention for who they're targeting, but it's entirely inappropriate for every website and online service to force their opinion onto others. They aren't qualified to know what's appropriate for a specific child and platforms like facebook have repeatedly demonstrated that they absolutely can't be trusted to put our children's interests above their own.
Your comment has the effect of being flippant, condescending, and seemingly callous to the subject matter. When called out, you have backed up to an alternative explanation which is, again, massively condescending (I don't need channeling mate, certainly not from you).
You have not engaged with the content in a good faith manner.
So, standing back and looking at your comment in terms of its effects rather than what it claims to be its effects (AND the effect that making those secondary claims have - doubling down on condescension), it looks more like you're trying to bully me into changing my behaviour and viewpoint without meaningfully engaging with the content.
Ironically, I'm feeling psychological reactance, so your comments polarized me against you (see the Backfire Effect) and deepened my convictions.
I won't engage with bullies any further but to call them out, I'm hesitant to bring the conversation down to this level and give you any kind of air to begin with, but I think it's important to analyze discourse as it happens.
That is a decision you had the freedom to make for yourself and your family. In this case, the millions of children didn’t get to make that choice and meta knowingly exploited that. I hope you see our point of view as to why meta doesn’t get the benefit of doubt here.
Let's say a 40 y/o man finds a phone on the ground, sees a name stuck on it, googles "name + town" and finds the facebook of a 12 y/o girl, and messages "Hey I found this phone, do you recognize it? <photo>"
With e2e encryption, you can't easily tell the difference between that and a creep.
This thread is advocating that exactly that case should result in a police visit with the assumption of guilt.
"The state has no business listening in on private citizen's communication."
So yes, the concept of privacy is so primary that it requires an entire legal framework for the state to listen in.
--
In terms of the rest of your post, even though you quoted out of context, what you're saying is fine. But the people noticing things on the street, have nothing to do with those who maintain the roads. You really don't want corporations to have algorithms which mean they have to report trigger words to the police or state.
Instead, as I said, empower the parents. Legal guardians. It's their job to watch.
and, an adult can mark themselves as a child.
That only happens to "publications" in fields where state regulation has mandated it, or where enough noise about mandating it (or simply censoring content) pushed the industry to adopt a rating system to head off regulation. And even in the latter case, there are always plenty of publishers that don't use the industry rating system, either at all or for selected publications in the field the ratings nominally cover.
> They provide a "kids" profile populated with their own curated content if that's the kind of thing I want and for everything else they provide ratings
Netflix does not provide ratings for "everything else". Most of what they carry has either MPAA or TV Parental Guidelines ratings, and if it has such ratings they provide them. But they have content which does not have such ratings, which is simply noted as not being rated. (Of course, if "not rated" is a valid option under your "you must have ratings in an HTTP header" law, then it is trivial to comply by sending the "not rated" header for every piece of content, but this doesn't actually achieve anything.)
That's fine, but it needs an enforcement mechanism, or we're back to where we currently are ("click here if you're 18").
> It would be easy enough to push a rating to clients, they could even use HTTP headers for it. If lawmakers really felt the need to interfere in all of our operating systems it could require some means to collect and act on those ratings.
I would completely agree it seems reasonable at a glance to have websites push ratings and have the enforcement be done e.g. at the web browser level (with the web browser knowing how to enforce based on the OS's supplied age bracket), rather than making websites read the age bracket and act on it directly. Although it does still run into questions about how you handle websites with content from multiple brackets (like Reddit or X)-- what's the UX supposed to look like if a child attempts to access adult content on one of those platforms? If the platform can't know what's happening (due to your privacy/safety concerns), then you're limited to the web browser entirely breaking the interaction or somehow redirecting them somewhere else.
Imagine no e2e for a moment for FB. Policy can be smart enough to pick up that this communication is not representative or normal. That's part of detection.
Second, a single message to someone on a random phone is not going to flag anything.
Third - there is no assumption of guilt. Not even an arrest is assumption of guilt.
Finally - those are extraordinary corner cases. They will happen, but they get resolved the moment the guy says 'oh, I found this phone' - because that will be 100% clear in that context.
Obviously - things can go awry. Meta flags something as bad and sends it to police - they don't follow procedure, or don't apply something correctly, and arrest a guy at his place of work. But in the scenario you described, it's literally not a problem - there are 'common sense checks' through the whole thing: the algo, the human making the notification to the police, the police, the judge if a warrant is required. People are not going to be arrested because they found a phone and texted their niece - if that happens, then we have another set of problems.
We can 100% have our 'friendly community' with Facebook.
Now - with an e2e thing like Signal, well, yes, it could theoretically be a problem, but the likelihood of some rando finding a phone that's not locked and being able to text some other 12 year old, and effectively 'pose' as their 'contact' - well, that's a rare case scenario.
They already do.
The entire financial system, all of social media, and many organizations past a certain size.
I did not quote out of context - the commenter was misattributing context.
It'd be dead simple to tell if a website returned a rating or not: just pull the HTTP headers, and if the rating isn't there, warn them first and then fine them. You could even have browsers refuse to load pages that didn't include a rating header in their response, and enforcement would take care of itself.
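A minimal sketch of that check, assuming a hypothetical `Content-Rating` header and made-up rating values (no such header exists today; this is only what the scheme above might look like):

```python
# Hypothetical compliance check for a made-up "Content-Rating" header.
# The header name and the rating vocabulary are illustrative, not a real standard.
VALID_RATINGS = {"all-ages", "teen", "adult", "not-rated"}

def check_rating_header(headers: dict) -> str:
    """Return the page's declared rating, or flag the site as non-compliant."""
    rating = headers.get("Content-Rating", "").lower()
    if rating in VALID_RATINGS:
        return rating
    return "non-compliant"  # i.e., warn first, then fine

# Simulated response headers from two sites:
print(check_rating_header({"Content-Rating": "adult"}))    # adult
print(check_rating_header({"Content-Type": "text/html"}))  # non-compliant
```

A regulator (or a browser) only needs the response headers for this, not the page body, which keeps the check cheap.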
> it does still run into questions about how you handle websites with content from multiple brackets
I think it'd be up to Reddit (or its mods) to set ratings for each subreddit and moderate accordingly. Pages at /r/MsRachel/ would return a different rating than /r/watchpeopledie.
Same with Twitter, I guess. Every user could specify whether their account is intended for children or not. Elmo's Twitter account would be shown to everyone, while accounts that don't intend to self-censor wouldn't.
> what's the UX supposed to look like if a child attempts to access adult content on one of those platforms?
Browsers that detect a rating higher than authorized could just throw up an about:blocked page telling kids to talk to their parents for access to the page they wanted, or to click the back button to return to the page they were on.
The platforms would see that a page was requested, and they'd transmit the data to the client along with the rating header. They wouldn't get any signal that the page was blocked. It'd look no different on the server side than it would if the user had clicked a link and then closed their browser/tab/window. If you wanted to be sneaky, you could actually have the browser load the page in the background to avoid platforms guessing between a closed tab and blocked access.
This not only solves the privacy/safety concerns; most importantly, it puts parents back in control of what their children can access. Parents would even be able to run software that logs the times/URLs of blocked pages and lets them override a rating by URL or domain. Parents could block roblox.com even though it returns a "for kids" header, if they didn't want their 8 year old playing in an ad infested online pedo playground, but still allow their mature 10 year old access to plannedparenthood.org even though it has an adult rating, without exposing them to everything else adult on the internet.
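The parent-override idea could be sketched like this; the rating names and the block/allow lists are hypothetical, purely to illustrate how per-domain overrides would trump the page's declared rating:

```python
# Browser-side enforcement with parental overrides (all names illustrative).
RATING_ORDER = ["all-ages", "teen", "adult"]  # least to most restricted

def is_allowed(domain: str, page_rating: str, child_max: str,
               blocked: set, allowed: set) -> bool:
    """Decide whether to render a page for this child's profile."""
    if domain in blocked:   # parent blocked it despite a kid-friendly rating
        return False
    if domain in allowed:   # parent allowed it despite an adult rating
        return True
    # Otherwise fall back to comparing the page's rating to the child's bracket.
    return RATING_ORDER.index(page_rating) <= RATING_ORDER.index(child_max)

blocked = {"roblox.com"}
allowed = {"plannedparenthood.org"}
print(is_allowed("roblox.com", "all-ages", "teen", blocked, allowed))          # False
print(is_allowed("plannedparenthood.org", "adult", "teen", blocked, allowed))  # True
print(is_allowed("example.com", "adult", "teen", blocked, allowed))            # False
```

The point of the design is that all of this runs client-side: the server never learns whether the page was shown or blocked.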
There are countless better alternatives to what Facebook wants us all to be subjected to, but Facebook couldn't care less about our interests; they are only looking out for themselves, and lawmakers are happy to take their bribes, eager to erode our ability to browse without an ID attached to our every action.
And some things are reported, others are not. Point being, yes, E2E content isn't reported, for obvious reasons. Loads of stuff isn't reported on social media; in fact, that's the absurd complaint against Meta!
And regardless of what is done now, that doesn't mean we want it. I didn't say it is or isn't done, I said "You really don't" want that. The more encroachment in that realm, the less free a people are.
We 100% absolutely do want 'basic surveillance' on many systems, and it's not even an argument.
It's like saying 'We shouldn't have police, because they are oppressive!' and assuming things would just carry on and not go to pot.
It's a wild assertion.
Formally - the entire financial system is about attribution, fraud, monitoring and security.
That's probably more than 1/2 of the function.
Your money would not be safe if your bank didn't have good controls, or if we did not have good regulations around those functions.
It's why if you send > $10K overseas, it gets flagged. We generally want this, though obviously within a regulated context.
Less formally, we absolutely, 100% do want the 'Starbucks employees' to have enough common sense to call the police or to flag something if there is some creepshow doing something that may be 'legal' but is obviously not appropriate - within reason.
Starbucks has not only 'policy' around behaviour, but also we have 'common sense' as a society.
It's not even remotely contentious that Starbucks is both private property and can set some 'terms', but that it's also a regular community locale, with social conventions.
Just as Facebook - and many (most places) like that are 'community hang outs' - subject to regular social conventions, established by the 'owners'.
They're not 'no-identity-hacker-zones' for folks to publish their freak-ware or whatever, with ultra privacy guarantees.
Conversely - yes - it's just as important that if people want to establish their 'hacker-zones' - they can do that. That's important. And obviously Facebook has to be subject to some minimal privacy regulations.
But most places will have some degree of social overview (like literally the grocery store would have) and 'that's normal' in any civil society.
It's already pervasive because it's impossible to have basic social function without them.
Read the story about the former Twitter CEO who talks about this kind of thing pre-Elon Musk. 'Moderation' is most of the job and by far the hardest thing. We think of it as 'back end systems', but it has almost nothing to do with that. It's the 'social' part of the 'social network' that's the key part. Moderation.
"As with smoking, alcohol, sex, drugs etc
Children who are smart enough to get access to a given vice without getting caught are more likely to be mature enough to be able to cope with that vice."
There are at least two problems here. The one I've focused on first that you seem so keen to dispel, is an assumption that there are smart kids overcoming a challenge. 'Roy' is an extreme, but there is a whole spectrum of low-oversight conditions that are likely to lead to kids getting access to alcohol, tobacco, drugs, having sex etc, which are nothing to do with smartness or challenges and are much more to do with shitty parenting and neglect.
Then there's the second problem. Let's focus on tobacco, though I believe it's likely to hold for other drugs as well: even if we allow that children getting access to tobacco are 'smarter' than those who don't figure it out, and are overcoming various obstacles, that doesn't actually imply that they'll be better able to deal with the consequences. Just like how a high IQ doesn't always mean someone is necessarily good at crossing the road safely or tying their shoelaces.
In fact there's a variety of research on nicotine's effect on developing brains showing that the earlier people are exposed, the more likely they are to be more addicted for longer [0][1]. This is the opposite of the original claim: kids who start earlier are demonstrably less likely to be able to 'cope' with the vice.
The whole claim is nonsense.
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC3615117/ [1] https://www.tobaccoinaustralia.org.au/chapter-6-addiction/6-...
(edit - I'm not making specific claims about cybersecurity or access to tech here, I just think the analogy is pretty seriously wrong in itself)
Let's consider the four combinations of the two variables here: dumber and smarter kids, and worse and better parents. The kids with worse parents will have access to the vice regardless of whether they're dumb or smart, but the kids with better parents will only have access if they're smart enough to figure out a way around parents actively trying to prevent it. So three of the four quadrants have access, and two of those three are the smarter kids.
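The quadrant counting can be spelled out with a quick enumeration (purely illustrative, taking the argument's assumptions at face value):

```python
# Enumerate the four (kid, parent) combinations from the argument above.
from itertools import product

def has_access(kid, parent):
    # Assumption: worse parents -> access regardless;
    # better parents -> only smart kids get through.
    return parent == "worse" or kid == "smart"

access = [(kid, parent)
          for kid, parent in product(["dumb", "smart"], ["worse", "better"])
          if has_access(kid, parent)]
print(access)            # three of the four quadrants have access
smart_share = sum(1 for kid, _ in access if kid == "smart") / len(access)
print(round(smart_share, 3))  # 0.667 -- two thirds of the quadrants with access
```

Note this only counts quadrants, not kids, which is exactly the weakness the reply below points out: without the relative sizes of the quadrants it says nothing about the overall population.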
> even if we allow that children getting access to tobacco are 'smarter' than those who don't figure it out, and are overcoming various obstacles, that doesn't actually imply that they'll be better able to deal with the consequences.
That's assuming the way they deal with it better is by trying the drug and then somehow not getting addicted, rather than by choosing not to try the drug in the first place even though they could access it if they wanted to, or by making more measured choices if they do decide to try something: finding a source more likely to provide the expected amount of the expected substance instead of who knows how much of who knows what, or just hesitating a while so their first time comes at an older age.
But only one of those involves overcoming anything.
And unless you have information on the relative sizes of those quadrants, it’s meaningless in terms of the overall picture and being able to confidently assert that access to such contraband allows you to draw any inferences about intelligence whatsoever.
And the rest appears to be some serious mental gymnastics to avoid the point, which I don’t believe for a second was meant to encompass “children who are smart enough to get access to do a thing but don’t actually do the thing because they’re so damn smart”. Nor do I believe that 14 year olds who find a willing drug dealer are more likely to take sensible precautions than their peers, having proven their smarts by finding one!
The whole premise is laughable.