[Edit: spelling sigh]
Now articles from organizations with legitimate journalists and fact-checkers like the NYT, WSJ, or The Economist will need an “AI generated” badge because they used an AI assistant and have risk-averse legal departments. This will be gleefully pointed out by every brain-dead Twitter conspiracy theorist, Breitbart columnist, 9/11-truther Substack writer, and Russian spam bot as they happily spew unbadged drivel out into the world. Thanks so much, New York!
AI doesn’t make bad news content. Complete disregard for objective reality does. I’ll take an AI-assisted human who actually cares about truth over an unassisted partisan hooligan every time.
If this is the best our legislatures can come up with we are so utterly fucked…
IMO: it's already too late, and effort should instead be focused on recognizing this and quickly moving on to prevention through education instead of trying to smother it with legislation; it is just not going away.
>The use of generative artificial intelligence systems shall not result in: (i) discharge, displacement or loss of position
Being able to fire employees is a great use of AI and should not be restricted.
> or (ii) transfer of existing duties and functions previously performed by employees or worker
Is this saying you can't replace an employee's responsibilities with AI? No wonder the article says it is getting union support.
Even beyond AI, the vast majority of news is re-packaging information you got from somewhere else. AI can replace the re-writers, but not the original journalists, people who spoke to primary sources (or who were themselves eyewitnesses).
Any factual document should reference its sources. If not, it should be treated skeptically, regardless of whether AI or a human is doing that.
An article isn't automatically valueless just because it's synthesized. It can focus and contextualize, regardless of whether it's human or AI written. But it should at the very least be able to say "This is the actual fact of the matter", with a link to it. (And if AI has hallucinated the link, that's a huge red flag.)
Step 3: regulator prohibits putting label on content that is not AI generated
Step 4: outlets make sure to use AI for all content
Let's call it the "Sesame effect"
So clawdbot may become a legal risk in New York, even if it doesn't generate copy.
And you can't use AI to help evaluate which data AI is forbidden to see, so you can't use AI over unknown content. This little side-proposal could drastically limit the scope of AI usefulness overall, especially as the idea of data forbidden to AI tech expands to other confidential material.
It also doesn't work to penalize fraudulent warnings - they simply include a harmless bit of AI to remain in compliance.
> substantially composed, authored, or created through the use of generative artificial intelligence
The lawyers are gonna have a field day with this one. This wording makes it seem like you could do light editing and proof-reading without disclosing that you used AI to help with that.
[0] https://en.wikipedia.org/wiki/1986_California_Proposition_65
It's kinda funny, the oft-held animosity towards the EU's heavy-handed regulations, when navigating US state law is a complete minefield of its own.
To emphasize this: it's important that the organization assume responsibility, just as they would with traditional human-generated 'content'.
What we don't want is for these disclaimers to be used like the disclaimers of tech companies deploying AI: to try to weasel out of responsibility.
"Oh no, it's 'AI', who could have ever foreseen the possibility that it would make stuff up, and lie about it confidently, with terrible effects. Aw, shucks: AI, what can ya do. We only designed and deployed this system, and are totally innocent of any behavior of the system."
Also don't turn this into a compliance theatre game, like we have with information security.
"We paid for these compliance products, and got our certifications, and have our processes, so who ever could have thought we'd be compromised."
(Other than anyone who knows anything about these systems, and knows that the stacks and implementation and processes are mostly a load of performative poo, chosen by people who really don't care about security.)
Hold the news orgs responsible for 'AI' use. The first time a news report wrongly defames someone, or gets someone killed, a good lawsuit should wipe out all their savings on staffing.
One approach we’ve been exploring is turning high-stakes AI outputs (like news summaries or classifications) into consensus jobs: multiple independent agents submit or vote under explicit policies, with incentives and accountability, and the system resolves the result before anything is published. The goal isn’t “AI is right,” but “this outcome was reached under clear rules and can be audited.”
That kind of structure seems more scalable than adding disclaimers after the fact. We’re experimenting with this idea on an open source CLI at https://consensus.tools if anyone’s interested in the underlying mechanics.
That might at least offer an opportunity for a news source to compete on not being AI-generated. I would personally be willing to pay for information sources that exclude AI-generated content.
The web novel website RoyalRoad has two different tags that stories can/should add: AI-Assisted and AI-Generated.
Their policy: https://www.royalroad.com/blog/57/royal-road-ai-text-policy
> In this policy, we are going to separate the use of AI for text, into 3 categories: General Assistive Technologies, AI-Assisted, AI-Generated
The first category does not require tagging the story, only the other two do.
> The new tags are as such:
> AI-Assisted: The author has used an AI tool for editing or proofreading. The story thus reflects the author’s creativity and structure, but it may use the AI’s voice and tone. There may be some negligible amount of snippets generated by AI.
> AI-Generated: The story was generated using an AI tool; the author prompted and directed the process, and edited the result.
Some may find it surprising that this is left over from the Sun's early support from the crypto journalism project Civil.
Status quo bias is a real thing, and we are seeing those people in meltdown as the world changes around them. They think avoiding AI, putting disclaimers on it, etc., will matter. But they aren't being rational, they are being emotional.
The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.
Can you elaborate on this?
Totally agree with you: all newspapers should cite sources. What’s silly to me is how selectively people care—big outlets get to hand-wave the “trust me” part even when a piece is basically a lightly rewritten press release, thinly sourced, or reflecting someone’s incentives more than reality.
I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or, for that matter, the internet). The legitimacy of the US state appears to be in a sort of free-fall, for example.
It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.
And even amidst a diversity of views/assessments of the future of the state, there seems to be near consensus regarding the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.
If not: I suspect fewer people may care and so what's the point of the label?
If so: why would they continue to use AI solely to clean up photos?
Which, by the way, is how to do it if you ever want to get in the paper; it's super easy. AI will help you learn how to write in the right tone/voice for news if you don't know how.
Step 1: those outlets that actually do the work see an increase in subscribers.
Even though it has been instructed to maintain privacy between people who talk to it, it constantly divulges information from private chats, gets confused about who is talking to it, and so on.^ Of course, a stronger model would be less likely to screw up, but this is an intrinsic issue with LLMs that can't be fully solved.
Reporters absolutely should not run an instance of OpenClaw and provide it with information about sources.
^: Just to be clear, the people talking to it understand that they cannot divulge any actual private information to it.
Step 2.5: 'unlike those news outlets, all our work is verified by humans'
Step 3: works as intended.
But I wouldn't be surprised to see a massive % of comments that I don't instantly attribute to AI actually being AI. RP prompts are just so powerful, and even my local mediocre model could have written 100 comments in the time it's taking me to write this one.
All humans are pattern-seeking to a fault; the number of people, even in this community, who will not consider something AI-generated just because it doesn't have em-dashes or emojis is probably pretty high.
I think you're saying that "AI"-written content has a certain feel that just seems off, which makes it obviously "AI"-written.
Yes. But you've no way of knowing that's most. There could be 10x more that we don't detect.
How would you classify fraudulent warnings? "Hey chatgpt, does this text look good to you? LGTM. Ship it".
I don’t like AI slop but this kind of legislation does nothing. Look at the low quality garbage that already exists, do we really need another step in the flow to catch if it’s AI?
You legislate these problems away.
Economic value or not, AI-generated content should be labeled, and trying to pass it off as human-written should be illegal, regardless of whether or not people get used to AI content.
With that attitude we would not have voting, human rights (for what they're worth these days), unions, a prohibition on slavery and tons of other things we take for granted every day.
I'm sure AI has its place but to see it assume the guise of human output without any kind of differentiating factor has so many downsides that it is worth trying to curb the excesses. And news articles in particular should be free from hallucinations because they in turn will cause others to pass those on. Obviously with the quality of some publications you could argue that that is an improvement but it wasn't always so and a free and capable press is a precious thing.
When your mind is so fried on slop that you start to write like an AI.
> The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.
Look at all this value created, like *checks notes* scam ads, apps that undress women and teenage girls, tech bros jerking each other off on Twitter, flooding open source with a tsunami of low-quality slop, inflating chip prices, thousands laid off as cost savings, and dozens more.
Cat is out of the bag for sure.
I'm a data journalist, and I use AI in some of my work (data processing, classification, OCR, etc.). I always disclose it in a "Methodology" section in the story. I wouldn't trust any reporting that didn't disclose the use of AI, and if an outlet slapped a disclaimer on their entire site, I wouldn't trust that outlet.
Like how California's law about cancer warnings is useless, because it makes it look like everything is known to the state of California to cause cancer, which in turn makes people just ignore and tune out the warnings because there's no signal left in the noise. This in turn harms people when they think, "How bad can tobacco be? Even my aloe vera plant has a warning label".
Keep it to generated news articles, and people might pay more attention to them.
Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad (somehow this is contaminated, lol), then it'll become a useless warning.
Does photoshop fall under this category?
So many words to say so little, just so they can put ads between every paragraph.
None of these things were rolling back a technology. History shows that technology is a ratchet; the only way to get rid of a technology is social collapse, or supplanting it with something even more useful, or at the very least approximately as useful but safer.
Once a technology has proliferated, it's a fait accompli. You can regulate the technology, but turning the clock back isn't going to happen.
People have been writing articles without the help of an LLM for decades.
You don't need an LLM for grammar and spell checking; arguably an LLM is less efficient and currently worse at it anyway.
The biggest help an LLM can provide is with research, but that is only because search engines have been artificially enshittified these days. And even here the usefulness is very limited because of hallucinations. So you might be better off without.
There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high quality content.
So no, don't believe the hype. There will still be enough journalists not using LLMs at all.
Californians have measurably lower concentrations of toxic chemicals than non-Californians, so very useless!
https://www.washingtonpost.com/climate-solutions/2025/02/12/...
> The study, published Wednesday in Environmental Science & Technology, found that California’s right-to-know law, also known as Proposition 65, has effectively swayed dozens of companies from using chemicals known to cause cancer, reproductive harm or birth defects.
...
> Researchers interviewed 32 businesses from a variety of sectors including personal care, clothing and health care, concluding that the law has led manufacturers to remove toxic chemicals from their products. And the impact is significant: 78 percent of interviewees said Proposition 65 prompted them to reformulate their ingredients; 81 percent of manufacturers said the law tells them which chemicals to avoid; 69 percent said it promotes transparency about ingredients and the supply chain.
Editing and proofreading are "substantial" elements of authorship. Hope these laws include criminal penalties for "it's not just this - it's that!" "We seized Tony Dokoupil's computer and found Grammarly installed." Right, straight to jail.
You'd lose a lot of valid sourcing if you made this a requirement. For example, the Catholic Church scandal investigation would never have seen the light of day if the key legal sources corroborating the story had to give up their identity as part of the process. Speaking off the record is often where a lot of those kinds of stories come together.
And the reaction around the world to that story, the thousands of victims that came forward, resoundingly confirmed what people were saying on background.
Low-effort content mills will never, ever care enough to generate more accurate, consensus-based output, especially if it adds complexity and cost to their workflows.
> That kind of structure seems more scalable than adding disclaimers after the fact.
Not if your goal as a business is to churn out slop as fast and cheaply as possible, and a whole lot of online content is like that. A disclaimer is warranted because you cannot force everyone to use the kinds of approaches that you're talking about. A ton of people who either don't know or don't care what they're putting out will inevitably exist.
One of the most persistent, and also dumbest, opinions I keep seeing, both among laymen and people who really ought to know better, is that we can solve the deepfake problem by mandating digital watermarks on generated content.
The downside to having labels on AI-written political comments, stellar reviews of bad products, speeches by a politician, or supposed photos of wonderful holiday destinations in ads targeted at old people is what, exactly?
Are you really arguing that putting a label on AI-generated content could somehow do more harm than just leaving it (approximately) indistinguishable from the real thing?
I'm not arguing that we need to label anything that used gen AI in any capacity, but past the point of e.g. minor edits, yeah, it should be labeled.
I guess you'd have to disclose every single item on your news site that does anything like this. Any byte that touches a stochastic process is tainted forever.
This is very predictably what's going to happen, and it will be just as useless as Prop 65 or the EU cookie laws or any other mandatory disclaimers.
Because no one believes these laws or bills or acts or whatever will be enforced.
But I actually believe they will be. In the worst way possible: honest players will be punished disproportionately.
IMO, it's a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate; published AI-generated content, much more difficult.
The article also mentions efforts by news unions and guilds. This might be a more effective mechanism. If a union/guild required members to add a tagline to their content/articles, this would have a similar effect - showing what is and what is not AI content without restricting speech.
Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model to verify which model generated the text, so people can independently check whether articles were written by humans, is probably the only feasible way. But it's not like humans are impartial when writing news anyway, so I don't even see the point of that.
Also it's objectively very low in violent crime, and the problems you talk about exist in every PNW city, e.g. SF, Portland, Vancouver (WA), Vancouver (BC), Seattle. They're also the places where all the innovation, including AI (despite the non-SF cities' hatred for it), is happening.
And usually the general public does not have a direct stake in the outcome (ok, maybe broadcast spectrum regulation should be mentioned there), but this time they do and given what's at stake it may well be worth trying to define what a good set of possible outcomes would be and how to get there.
As I mentioned above and which TFA is all about, the press for instance could be held to a standard that they have shown they can easily meet in the past.
A new bill in the New York state legislature would require news organizations to label AI-generated material and mandate that humans review any such content before publication. On Monday, Senator Patricia Fahy (D-Albany) and Assemblymember Nily Rozic (D-NYC) introduced the bill, called The New York Fundamental Artificial Intelligence Requirements in News Act — The NY FAIR News Act for short.
“At the center of the news industry, New York has a strong interest in preserving journalism and protecting the workers who produce it,” said Rozic in a statement announcing the bill.
A closer look at the bill shows a few regulations, mostly centered around AI transparency, both for the public and in the newsroom. For one, the law would demand that news organizations put disclaimers on any published content that is “substantially composed, authored, or created through the use of generative artificial intelligence.”
AI disclaimers for readers have been hotly debated in the news industry, with some critics arguing that such labels alienate audiences, even when generative AI is only used as an assistive tool. The bill contains a carve-out that would allow copyrightable material to be excluded from the law. (The U.S. Copyright Office has ruled that works solely generated by AI systems are not eligible for copyright, but allows leeway for works that show signs of “human authorship.”)
The bill also requires that news organizations disclose to journalists and other media professionals in their newsrooms when AI is being used and how. Any news content created using generative AI must also be reviewed by a human employee “with editorial control” before publication. That goes not just for news articles but also for audio, images, and other visuals.
In addition, the bill contains language that requires news organizations to create safeguards that protect confidential material — mainly, information about sources — from being accessed by AI technologies.
State lawmakers highlighted two main reasons for proposing the NY FAIR News Act. First, they say, AI-generated content may be “false or misleading.” Second, they argue, AI-generated content “plagiarizes” by deriving content from original sources “without permission or proper citation.”
“Perhaps one of the industries at most risk from the use of artificial intelligence is journalism and as a result, the public’s trust and confidence in accurate news reporting,” said Sen. Fahy in a statement. “More than 76% of Americans are concerned about AI stealing or reproducing journalism and local news stories.”
The proposed bill was announced with broad endorsements from unions across the news industry, including WGA-East, SAG-AFTRA and the DGA.
Jennifer Sheehan, a spokesperson for the NewsGuild of New York, confirmed that the NewsGuild has been meeting with this labor coalition to discuss shared concerns around AI adoption and working to get the bill off the ground.
Notably, the bill would cement some labor protections for newsroom workers — including restrictions on firing journalists or reducing their work, pay, or benefits due to generative AI adoption. Similar language has been negotiated into individual newsroom union contracts across the country over the past couple of years.
In December, the NewsGuild launched a nationwide campaign called “News Not Slop” to advocate for more guardrails on AI usage in newsrooms. In New York City, the Business Insider union held a rally in the Financial District to protest an editorial pilot that was publishing AI-generated news stories with an “AI byline.”
“Our union is deeply concerned about media companies implementing artificial intelligence in ways that damage the credibility of our members’ journalism,” Sheehan said, “as well as the impact such technology has had and will have on jobs.”
They already believe that and it’s used to keep us fighting each other.
Many people here love SV hackers who have done the impossible, like Musk. Could you imagine this conversation at an early SpaceX planning meeting? That was a much harder task, requiring inventing new technology and enormous sums of money.
Lots of regulations are enforced and effective. Your food, drugs, highways, airplane flights, etc. are all pretty safe. Voters compelling their representatives is commonplace.
It's right out of psyops to get people to despair - look at messages used by militaries targeted at opposing troops. If those opposing this bill created propaganda, it would look like the comments in this thread.
1) Anyone can join the W3C group; you don't need to be a formal member of W3C!
2) What's dumb about the proposal itself? How could it better achieve its goals?
3) You can see some dialogue at https://github.com/WICG/proposals/issues/261 - what resonates and doesn't in the feedback and critique?
However the technology is nonetheless here to stay, until it's replaced with something better.
As with everything else, BigCo and its legal team will explain to the enforcers why its "right up to the line, if not over it" solution is compliant, while MediumCo and SmallCo will be the ones getting fined, or forced to waste money staying far from the line, or paying a third party to do what BigCo's legal team does at cost.
Either you generated it with AI, in which case I can happily skip it, or you _don't know_ if AI was used, in which case you clearly don't care about what you produce, and I can skip it.
The only concern then is people who use AI and don't apply this warning, but given how easy it is to identify AI generated materials you just have to have a good '1-strike' rule and be judicious with the ban hammer.
Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement 15 years after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already many traumatic events occurring downstream of slapdash AI development.
I see a bright future for the internet
That’s because they can’t be.
People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.
The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.
Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.
You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.
They can publish all they want, they just have to label it clearly. I don’t see how that is a free speech issue.
This is a concept at least in some EU countries, that there has to always be one person responsible in terms of press law for what is being published.
However, I do not think starting there is a good idea: your progress in other areas would stall until that crook has been dealt with.
We already see this with the California label: it gets applied to things that don't cause cancer, because putting the label on is much cheaper than going through the process of proving that some random thing doesn't cause cancer.
If the government showed up and claimed your comment was AI generated and you had to prove otherwise, how would you?
https://apnews.com/article/sesame-allergies-label-b28f8eb3dc...
That's a concerning lens through which to view regulations. Obviously true, but for all laws. Regulations don't apply only to what would be immediately observable offenses.
There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain as more people get involved, which is the reason for whistle-blower protections.
VW's Dieselgate[1] comes to mind albeit via measurable discrepancy. Maybe Enron or WorldCom (via Cynthia Cooper) [2] is a better example.
[1]: https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal [2]: https://en.wikipedia.org/wiki/MCI_Inc.#Accounting_scandals
Like every law passed forever (not quite but you get the picture!) [1]
Unless you're trying to tell me that writers won't report on their business that's trying to replace them with AI.
So legislators, should they so choose, could demand source material be recorded on C2PA-enabled cameras and that the original recordings be produced on demand.
Without emotion, without love and hate and fear and struggle, only a pale imitation of the human voice is or will be possible.
Sounds like ignoring it worked fine for them then.
Most regulations around disclaimers in the USA are just civil and the corporate veil won't be pierced.
I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system.
The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted.
Why, though? Whether the AI played the role of an editor or the role of a reporter seems like a clear distinction to me, and likely to anyone else familiar enough with how journalism works.
It's odd that legislators seem largely incapable of learning from the rich history of past legislative mistakes. Regulation needs to be narrowly targeted, clearly defined and have someone smart actually think through how the real-world will implement complying as well as identifying likely unintended consequences and perverse incentives. Another net improvement would be for any new regs passed to have an automatic sunset provision where they need to be renewed a few years later under a process which makes it easy to revise or relax certain provisions.
By that token bans on illegal drugs are fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective.
There may be little technical means to distinguish at the moment. But could that have something to do with lack of motivation? Let's see how many "AI" $$$ suddenly become available to this once this law provides the incentive.
Especially after you have already seen what your friend has done for four years.
Good god, this is pathetic. Do you financially gain from AI or do you think it's hard to prove someone didn't use it? Like this is the bare minimum and you're throwing temper tantrums...
The onus will be on the AI companies pushing these wares to follow regulations. If it makes it harder for the end user to use these wares, well too bad so sad.
But this is just my uninformed opinion; perhaps those who work in the health industry think differently.
I think you have this exactly right. They are mostly enforced against the poor and political enemies.
That was with GPT-4, but my own work with other LLMs shows they have very distinctive styles, even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning with tasks like summarization predisposes them to certain grammatical structures, so their output is always more information-dense and formal than human writing.
Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.
Please don't misrepresent what someone says. That does not lead to constructive dialog.
I gave a question challenging a specific way to regulate a specific thing, to indicate it is challenging. This is not the same as dismissing all regulations.
Also, please avoid the personal mentions.
>The onus will be on the AI companies pushing these wares to follow regulations.
That wasn't the challenge. The raised issue isn't AI companies labeling things AI. The given example included them very much following the regulation.
I don't know how much automatically opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue".
Barring the point of contention being integral to the business's revenue model, or management of the company being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager and team to get praised for making a revenue-negative change that reduces the risk of future fines.
Work like that is a gold mine; several people will probably get promoted for it.
Relative to no war on drugs? Who knows.
It's much easier to tell yourself Prop 65 warnings don't have to be heeded because "they're probably just there to cover their asses," while tobacco products carry real warnings that definitely mean danger (though there are people who convince themselves otherwise).
A quick Google search estimates that less than 3% of drugs are intercepted by the government.
I always wanted to try two specific ones, but the first cannot be had in its safest form because of a specific precursor ban, and all of them suffer from an insane (to me) risk of adulteration.
In twenty minutes I could probably find 10 "reputable" shops/markets, but still with 0 guarantee I won't get the specific thing laced with something for strength.
Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (the stench!), but then... where do I find sane seeds (with a healthy CBD-to-THC ratio)?
Similarly, I wouldn't buy the moonshine from someone risking prosecution to make and sell it. It's guaranteed this risk is offset.
So... I can't get what I want because there's an extremely high chance of getting hurt. An example being poisoning by pills sold as MDMA - at every music festival, multiple people get hurt. Not by molly, by additives.