FINAL Financial hours of U.S.A. just before the 1929 crash
https://www.youtube.com/watch?v=dxiSOlvKUlA&t=1008s
The Volcker Shock: When the Fed Broke the Economy to Save the Dollar (1980)
https://www.youtube.com/watch?v=cTvgL2XtHsw
How Inflation Makes the Rich Richer
We might be transitioning to a world where trust has value, and is earned and stored in your reputation. Clickbait is a symptom of people valuing attention over trust: clickbait producers spend a slice of their reputation by trading it for attention.
In a world of many providers, most people have not heard of any particular individual provider. That means the provider has no reputation to lose, so choosing to act in a reputation-losing manner is easy.
Beyond a certain scale, when everyone can play that game, we end up with the problem this article describes. The content is easy but vacuous, and far more people are vying for the same number of eyeballs now.
The solution is, I believe, earned trust. Curators select items from sources they trust. The ones that do a good job become trusted curators. In a sense HackerNews is a trusted curator. Reddit is one that is losing, or has lost, trust.
AI could probably take on some of that curation role, and in the future perhaps more so. An AI can scan the sources of an article to check whether those sources actually make the claims the article says they make. I doubt it can do so with sufficient accuracy to be useful right now, but I don't think that is too far off.
Perhaps the various fediverse Reddit clones had the wrong idea. Maybe they should operate in a distributed fashion where each node is a subreddit analogue, each with its own approach to curation; then an upper level of curation can build a site from the groups it trusts.
This makes a multi-level trust mechanism. At each level there are no rules governing behaviour; if you violate the values of a higher layer, it loses trust in you. AI could run its own curation nodes. It might be good at it or it might be terrible; it doesn't really matter. If it is consistently good, it earns trust.
I don't mind there being lots of stuff, if I can still find the good stuff.
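A minimal sketch of that multi-level trust idea (all names and numbers here are hypothetical, just to show the shape of it): a curator's trust is simply its observed track record, and a higher-level curator ranks items by the trust of whoever surfaced them.

```python
# Hypothetical sketch of multi-level curation trust.
# A curator's trust is its track record: the fraction of past picks
# that the layer above judged to be good.

def trust_score(good_picks, total_picks):
    """Trust is earned: the observed rate of good picks."""
    if total_picks == 0:
        return 0.0  # unknown curators start with no reputation
    return good_picks / total_picks

def rank_items(items):
    """A higher-level curator ranks items by the trust of the curator that surfaced them."""
    return sorted(items, key=lambda item: item["curator_trust"], reverse=True)

curators = {
    "retro-computing-sub": trust_score(90, 100),  # consistently good
    "growth-hacking-sub":  trust_score(10, 100),  # clickbait-heavy
}

items = [
    {"title": "Listicle #4071",            "curator_trust": curators["growth-hacking-sub"]},
    {"title": "Deep dive on 6502 timing",  "curator_trust": curators["retro-computing-sub"]},
]

ranked = rank_items(items)
print([i["title"] for i in ranked])
```

The point is that no layer needs rules, only memory: a curator that keeps surfacing vacuous content simply sees its score, and therefore its placement, decay.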
I just reached out to my family for any trustworthy builders they've had, and struck up conversations with some of my fancier neighbors for any recommendations.
(I came to the conclusion that all builders are cowboys, and I might as well just try doing some of this myself via youtube videos)
Using the internet to buy products is not a problem for me: I know roughly the quality of what I expect to get and can return anything not up to standard. Using the internet to buy services, though? Not a chance. How can you refund a service?
Interviewer: How will humans deal with the avalanche of fake information that AI could bring?
YNH: The way humans have always dealt with fake information: by building institutions we trust to provide accurate information. This is not a new phenomenon btw.
In democracies, this is often either the government (e.g. the Bureau of Labor Statistics) or newspapers (e.g. the New York Times) or even individuals (e.g. Walter Cronkite).
In other forms of government, it becomes trust networks built on familial ties e.g. "Uncle/Aunt is the source for any good info on what's happening in the company" etc
Maybe it could lead to a resurgence of the business model where you buy a program and don’t have to get married to the company that supports it, though?
I’d love it if the business model of “buy our buggy product now, we’ll maybe patch it later” died.
How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems? Short term, sure. But infinite (also) implies long term.
I wish I had a really smart game theorist friend who could help me project forward into time if for nothing other than just fun.
Don't get me wrong, I'm not trying to reduce the value of "ouch, it hurts right now" stories and responses.
But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
What's next after trust collapses? All of us just give up? What if that collapse is sooner than we thought; can we think about the fun problem now?
This is the biggie; especially with B2B. It's really 3 months, these days. Many companies have the lifespan of a mayfly.
AI isn't the new reason for this. It's been getting worse and worse in the last few years as people have been selling companies, not products, but AI will accelerate the race to the bottom. One of the things AI has afforded is that the lowest-tier, bottom-feeding scammer can now look every bit as polished and professional as a Fortune 50 company (often, even more so).
So that means that not only is the SNR dropping, the "noise" is now a lot riskier and uglier.
I stopped accepting telephone calls before 2010. They still ring the phone.
No structure, outdated stuff marked as "preview" from 2023/2024, Wikipedia-like in-depth articles about everything, but nothing for simple questions like: how do I implement a backend-for-frontend?
You find fragments and pieces of information here and there - but no guidance at all. Settings hidden behind tabs etc.
A nightmare.
No sane developer would have made such a mess; it exists because of time constraints and bloat. You see and experience first-hand that the few gems come from the trenches, spelling mistakes and all.
Bloat for SEO, the mess for devs.
Growing on X is so simple I’m shocked it works.
100x comments a day
10x posts a day
15x DM’s a day
1x thread a day
1x email a day
This is how you grow your presence on X.
Even if having a presence matters, how can you actually say something meaningful if you post 10 times a day? There's no way (unless you just repeat yourself). Hopefully my algorithm has just gone weird, but sadly the people I used to follow have stopped posting.
It's not just that it is zero-effort; it also sucks, and it is increasingly irrelevant because their agents are just scooping up things to reach out about, and they aren't even selling something you would need to buy.
I just wish that we could go back to the old way. There should be a cost to attempt to get a sales lead.
Just kidding, that just goes into my RL trash can.
There are a lot of people here who have spent a lot of time, money, and effort building AI products. And they may be good and worthwhile! I'm literally one of them. But you still see some people totally underestimate this public-facing trust collapse and the growing anti-AI sentiment in general.
In order for any AI/ML content to have value, it must cite where the accumulated information came from. Without that, it is nothing more than a custom Wikipedia-esque source with the motto _Trust Me Bro_.
Citations, or the lack thereof, should be a simple key factor in evaluating trust. What are your sources for this idea or answer?
The person you're physically interacting with might be using AI in their workflow; that's fine. I use AI too. I just don't want to "build a relationship" with AI. I don't care for AI "content". Art, blogs, articles, advertisements, even detestable things like sales and marketing are all forms of human relationships to me. It's fine if you wanna autogenerate it. I'm even "in the market" for autogenerated stuff, as I use AI bots too, but you can't try to sell me a 100% automated burger when I have the Fabricator 3000 too.
If I'm hungry and just want a burger, I'll get my Fabricator 3000 to generate one for me. If I'm in the mood for a human touch on food and a dining experience, I'll cook, go to a (reputable) restaurant, or go to a friend's place who likes to cook. Maybe there is a market for you to run your Fabricator 3000 to generate a burger for me. Maybe. I don't know why I'd buy it, though, when I can just get your prompt and feed it into my own Fabricator 3000...
When I need to be sure about something, I check it manually in trusted sources. If I am not sure, I check all the sources I can. More recently I run deep research on 3 agents (Claude, ChatGPT and Gemini) and then compare between the reports.
One small one I do not agree with is "Are you burning VC cash on unsustainable unit economics?". I think it's safe to conclude by now that unsustainable businesses can be kept alive for years as long as the investors want it.
The last five times I've looked at something in case it was a legitimate user email, it was AI promotion from someone just like the one in the article.
Their only way to escalate, apart from pure volume, is to take pains to intentionally emulate the signals of someone who's a legitimate user needing help or having a complaint. Logically, if you want to pursue the adversarial nature of this farther, the AIs will have to be trained to study up and mimic the dialogue trees of legitimate users needing support, only to introduce their promotion after I've done several exchanges of seemingly legitimate support work, in the guise of a friend and happy customer. All pretend, to get to the pitch. AI's already capable of this if directed adeptly enough. You could write a script for it by asking AI for a script to do exactly this social exploit.
By then I'll be locked in a room that's also a Faraday cage, poking products through a slot in the door—and mocking my captors with the em-dashes I used back when I was one of the people THEY learned em-dashes from.
One thing about it, it's a very modern sort of dystopia!
That’s why we’re seeing so much semantic drift too. The forms of credibility survive, but the intent behind them doesn’t. The system works, but the texture that signals real humans evaporates. That’s the trust collapse: over-optimized sameness drowning out the few cues we used to rely on.
I predict a renaissance of meeting people in person.
One such example was call centers. In the 2000s, implementing a call center in India was all the rage for cost cutting. The customer experience was terrible, and suddenly having a US-based call center (the thing companies had just abandoned) was a feature.
I think we’ll see similar things with AI. Everyone will get flooded with AI slop. Folks will get annoyed and suddenly interacting with a real human or a real human writing original content will be a “feature” that folks flock to.
I follow even AI slop via reddit RSS.
I control however what comes in.
This of course means that Freenow is now on the personal blacklist. People should not engage with companies who advertise with "AI" slop.
That's actually a wonderful result. Humans and their messages are not to be trusted. It's a bit late that we had to build AI to show us that.
If you want to get ahead, you'll need to find the 1% edge and exploit it for 15 minutes until a competitor erases your lead.
If it's not? Oh well, suffer. It's still better than the "average western male on a dating site" experience.
Note: I really like the metaphor. My apologies if I abused it or stretched it too far or in the wrong direction.
It's just funny, even by hand, to be writing in the infinite AI content style while lamenting the awfulness of infinite AI content while co-founding a monetization and billing system for AI agents.
Made my day. So true.
But you can’t really even make the case to them anymore because like you said they can’t/won’t even read your email.
What mostly happens is they constantly provide free publicity to existing big players whose products they will cover for free and/or will do sponsored videos with.
The only real chance you have to be covered as a small player is to hope your users aggregate to the scale where they make a request often enough that it gets noticed and you get the magical blessing from above.
Not sure what my point is other than it kinda sucks. But it is what it is.
I hope that will come to fruition.
Also, this is entirely hand-written ;)
The real question to ask in this gold rush might be what kind of shovels we can sell to this corner of hand shakers and lunchers. A human-verifiable reputation market? Like Yelp but for "these are real people and I was able to talk to an actual human." Or diners and golf carts, if you're not into abstractions.
I do love the irony of someone building a tool for AI sales bots complaining that their inbox is full of AI sales slop. But I actually agree with the article’s main idea, and I think if they followed it to its logical conclusion they might decide to do something else with their time. Seems like a great time to do something that doesn’t require me to ever buy or sell SaaS products, honestly.
You're assuming they can be fixed.
> But damned if we don't have an interesting and engaging problem on our hands right now. There's got to be some people out there who love digging in to complicated problems.
I'm sure the peasants during the Holodomor also thought: "wow, what an interesting problem to solve".
because you know the brands and trust them, to a degree
you have prior experience with them
Larry Fink and The Money Owners.
There are many equilibrium points possible as a result. Some have more trust than others. The "west" has benefited hugely from being a high trust society. The sort of place where, in the Prisoner's Dilemma matrix, both parties can get the "cooperate" payoff. It's just that right now that is changing as people exploit that trust to win by playing "defect", over and over again without consequence.
https://en.wikipedia.org/wiki/High-trust_and_low-trust_socie...
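The point about payoffs is just the standard Prisoner's Dilemma matrix, which is easy to make concrete (the numbers below are the conventional textbook values, nothing specific to this discussion):

```python
# Classic Prisoner's Dilemma payoffs: (row player, column player).
# Conventional values: mutual cooperation beats mutual defection,
# but defecting against a cooperator pays best of all for the defector.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # the high-trust equilibrium
    ("cooperate", "defect"):    (0, 5),  # the exploited cooperator
    ("defect",    "cooperate"): (5, 0),  # the exploiter
    ("defect",    "defect"):    (1, 1),  # the low-trust equilibrium
}

def total_payoff(a, b):
    """Combined payoff to both players, i.e. what 'society' gets."""
    return sum(PAYOFFS[(a, b)])

# Society as a whole does best when both sides cooperate...
assert total_payoff("cooperate", "cooperate") > total_payoff("defect", "defect")
# ...but each individual is tempted to defect against a cooperator.
assert PAYOFFS[("defect", "cooperate")][0] > PAYOFFS[("cooperate", "cooperate")][0]
```

That last assertion is the whole problem: the individually rational move erodes the equilibrium everyone benefits from, which is exactly what "playing defect over and over without consequence" looks like.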
I'll trust my doctor to give me sound medical advice and my lawyer for better insights into law. I won't trust my doctor's inputs on the matters of law or at least be skeptical and verify thoroughly if they are interested in giving that advice.
Newspapers are a special case. They like to act as the authoritative source on all matters under the sun, but they aren't. Their reporting is only as good as the sources they choose, and those sources vary wildly for reasons ranging from incompetence all the way to malice, on both sides.
I trust BBC to be accurate on reporting news related to UK, and NYT on news about US. I wouldn't place much trust on BBC's opinion about matters related to the US or happenings in Africa or any other international subjects.
Transferring or extending trust earned in one area to another unrelated area is a dangerous but common mistake.
Funny that he doesn’t say that the institutions have to provide accurate information, but just that we have to trust them to provide accurate information.
Wall Street, financier centric and biased in general. Very pro oligarchy.
The worst was their cheerleading for the Iraq war, and swallowing obvious misinformation from Colin Powell at face value.
It's annoying because the whole point of a lot of this stuff is that it's real, and one can be informed, entertained or have an emotional response to it. When you distrust everything because it's maybe fake, then the fun of the internet as a window into human nature and the rest of the world just disappears.
nevermind if the things are people or their lives!!
Now this formula has been complicated by technological engineering taking over aspects of marketing. This may seem to be simplifying and solving problems, but in ways it actually makes everything more difficult. Traditional marketing that focused on convincing people of solutions to problems is being reduced in importance. What is becoming most critical now is convincing people they can trust providers with potential solutions, and this trust is a more slippery fish than belief in the solutions themselves. That is partly because the breakdown of trust in communication channels means discussion of solutions is likely to never be heard.
Except those institutions have long lost all credibility themselves.
you need to prove beyond a doubt that YOU are the right one to buy from, because it's so easy for 3 Stanford dropouts in a trenchcoat to make a seemingly successful business in just a few days of vibecoding.
Make friends and work with people where possible. I get that some of this only works for us open source types, but the microphone guy isn't, he just did good work. I initially heard of his company through a pro sound engineer website, and ran with it when the advice turned out to be good.
The modern software market actually seems like a total inversion of normal human bartering and trade relationships, actually…
In Ye Olden Days, you go to the blacksmith, and buy some horseshoes. You expect the things to work, they are simple enough that you can do a cursory check and at least see if they are plausibly shaped, and then you put them on your horse and they either work or they don’t. Later you sell him some carrots, buy a pot: you have an ongoing relationship checkpointed by ongoing completed tasks. There were shitty blacksmiths and scummy farmers, but at some point you get a model of how shitty the blacksmith is and adjust your expectations appropriately (and maybe try to find somebody better when you need nails).
Ongoing contracts were the domain of specialists and somewhat fraught with risk. Big trust (and its associated mechanics, reputation and prestige). Now we're negotiating ongoing contracts for our everyday tools; it is totally bizarre.
hopefully soon we move onto judging content by its quality, not whether AI was used. banning digital advertisement would also help align incentives against mass-producing slop (which has been happening long before ChatGPT released)
Wow. A new profile text for my Tinder account!
> nevermind if the things are people or their lives!!
Breaking things is ok. If people are things then it's ok to break them, right? Got it. Gotta get back to my startup to apply that insight.
I'm with that group of people. What was your point in bringing this up?
Wait, was I just trolled? If so, lol. Got me!
You can't regress to being a kid just because the problems you face as an adult are too much to handle.
However this is resolved, it will not be anything like "before". Accept that fact up front.
If you try to “go back” you’ll just end up recreating the same structure but with different people in charge
Meet the new boss, same as the old boss. Biological humans cannot escape this state because it's a limit of the species.
If you are not building the next paperclip optimizer, the competition already is!
It is completely to be expected, exactly because it is not new.
It's been scarcely a generation since the peak in net change of the global human population, and will likely be at least another two generations before that population reaches its maximum value. It rose faster than exponentially for a few centuries before that (https://en.wikipedia.org/wiki/World_population#/media/File:P...). And across that time, for all our modern complaints, quality of life has improved immensely.
Of all the different experiences of various cultures worldwide and across recent history, "growth" has been quite probably the most stable.
Culture matters. People's actions are informed by how they are socialized, not just by what they can observe in the moment.
How do you know that? Or is it just that your bias is that cowboys are bad, so you assume someone who dresses and acts better is better?
Now step back; I'm not asking you personally, but the general person. It is possible that you have the knowledge and skills to do the job yourself, and so you know how to inspect the work to ensure it was done right. However, the average person doesn't have those skills, and so can't tell the well-dressed person who does a bad job that looks good from the poorly dressed person who does a good job that doesn't look as good.
This is just how I write in the last few years
Would this truly be a move back? I've met people outside my social class and disposition who seem to rely quite heavily on networking this way.
In any case, I can't complain, because I have received my share of favorable coverage. It is just less frequent when you don't have the personal connections.
For every standard to be met, you compromise either on cash or time.
I disagree with people suggesting removing noisy RSS feeds; some are noisy yet sometimes useful. I think RSS clients need advanced filtering and search.
I use my own project for RSS https://github.com/rumca-js/Django-link-archive, but it should be considered 'alpha' as I move fast and break things. It provides functionality I miss in most clients, and it is a web client, so I can access it from mobile and PC with no need for sync. I am not a front-end dev, so it does not look all that appealing.
https://www.theatlantic.com/technology/archive/2025/08/youtu...
https://www.nbcnews.com/tech/tech-news/youtube-dismisses-cre...
Nit: that is not how it worked. You took your horse to the blacksmith and he (almost always he; blacksmithing benefits from testosterone even if we ignore the rampant sexism) made shoes to fit. You knew it was good because the horse could still walk (if the blacksmith messed up, that put a nail into the flesh instead of the hoof, and the horse wouldn't walk for a few days while it healed). In 1600 he made the shoes right there for the horse; in 1800 he bought factory-made horseshoes and adjusted them. Either way, you never saw the horseshoes until they were on the horse, and your only check was that the horse could still walk.
Moreover, the more political a topic, the more likely the author is trying to influence your thoughts (but not me, I promise!). I forget who, but a historian was asked why they wouldn't cover Civil War history, and responded with something to the effect of "there's no way to do serious work there because it's too political right now".
It’s also why things like calling your opponents dumb, etc is so harmful. Nobody can fully evaluate the truthfulness of your claims (due to time, intellect, etc) but if you signal “I don’t like you” they’re rightfully going to ignore you because you’re signaling you’re unlikely to be trustworthy.
Trust is hard earned and easily lost.
I once went to a school that had complimentary subscriptions. The first time I sat down to read one, there was an article excoriating President Bush about Hurricane Katrina. The entire article was a glib expansion of the opinion of an "expert" who was just some history teacher, who said that it was "worse than the Battle of Antietam" for America. No expertise in climate. No expertise in disaster response. No discussion of facts. "Area man says Bush sucks!" would have been just as intellectually rigorous. I put the paper back on the shelf and have never looked at one since.
Don’t get emotionally attached to content farms.
It’s simply reality, or else propaganda wouldn’t work so well.
"We shape our tools, and thereafter our tools shape us", may be the apposite bon mot.
That aside, I did enjoy your article. Thank you.
Net-growth society: new wealth is being created, if you can be part of the creation you get wealth
No-growth society: only way to acquire wealth is to take it from someone else
Oh, plus essentially every society that experienced it legislated its way into a no-growth situation. The problem was not that growth was impossible; it's that people used state power, under a lot of different excuses, to prevent growth (and of course really to secure the position of the richest and most powerful in society).
The excuses range from religion, morality separate from religion, and wars, to avoiding losing wars (putting the entire economy into a usually futile attempt to win or avoid losing one), and of course the whole thing feeding on itself: laws protecting the rich at the direct expense of the poor (which can happen even if there is economic growth, though of course the more growth, the less likely).
Btw: those war efforts were futile not because they led to a win or a loss, but because the imposed cost of a no-growth society far exceeded any gains or even avoided losses...
If they didn't, we wouldn't be having these problems.
The problem isn't AI, it's how marketing has eaten everything.
So everyone is always pitching, looking for a competitive advantage, "telling their story", and "building their brand."
You can't "build trust" if your primary motivation is to sell stuff to your contacts.
The SNR was already terrible long before AI arrived. All AI has done is automated an already terrible process, which has - ironically - broken it so badly that it no longer works.
Our issue was water intrusion along a side wall that was flowing under our hardwoods, warping them and causing them to smell. The first contractor replaced the floor and added in an outside drain.
The drain didn't work, and the water kept intruding and the floor started to warp again.
When we got multiple highly rated contractors out, all of them explained that the drain wasn't installed correctly, that a passive drain couldn't prevent the problem at that location, and that the solution was to either add an actively pumped drain or replace the lower part of the wall with something waterproof. We ended up replacing that part of the wall, and that has fixed the issue along that wall. (We now have water intrusion somewhere else, sigh).
If anything, I was originally biased for the cowboy, as they came recommended, he and his workers were nice, and the other options seemed too expensive & drastic. Now I've learned my lesson, at least about these types of trickier housing issues.
Also, no one mentioned evaluating someone by how they're dressed - the issue was family/friend recommendations vs online reviews, and I while I do take recommendations from friends and family into account, I've actually had better luck trusting online (local) reviews.
Human biological limits prevent the realization of stable equilibrium at the scale of coordination necessary for larger emergent superstructures
Humans need to figure out how to become a eusocial superorganism because we’re past the point where individual groups don’t produce externalities that are existential to other groups/individuals
I don’t think that’s possible, so I’m just building the machine version
The police are happy they are paid. The victims are sad they are hurt. Is society better as a whole because it can handle crime? I'm not sure.
What does bad mean? Seems like an overloaded concept, ask around and good luck.
You can have a lot more fun by completely reducing the original question and plugging in different values for "strictly awful" and "AI content" and "it forces us to..."
How can eating be good if we just get hungry again? Implies eating is bad despite the value we derive from it.
How can hard work be bad if it produces meaningful results? Implies hard work is good despite the pains we take on from it.
I would argue that this kind of reduction and replacement significantly changes the original question, but it is a fun thing to explore. I'm not sure we'll get closer to an answer to the original, though. And I'm not sure it's safe to take the answer from one of the derived questions and use it for the original.
But don't take my word for it; I'm mostly restating one of the key points from Thinking, Fast and Slow.
Can I safely assume that what you were implying is that AI content is undesirable because it is a strain on human systems? I think that's the point the article was trying to make.
Well, no worries. If you subscribe to the post+ service I’ll fix it in a couple years, promise.
But billionaires are making and keeping ever more money than before, so it isn't a problem.
The key is that distrusting one side or source does not logically entail trusting another source more. If you think that the media or medical establishment is wrong, say, 45% of the time, you still have to find a source of information that is only wrong 40% of the time to prefer it.
The problem is that often we have to choose because decisions are binary: either we get a vaccine or not. For example, to decide not to get a vaccine, the belief that the medical establishment are lying liars is just not enough. We must also believe that the anti-vaxxers are more knowledgeable and trustworthy than the medical establishment. Doctors could be lying 60% of the time and still be more likely to be right than, say, RFK. It's not enough to only look at one side; we have to compare two claims against each other. For the best outcome, you have to believe someone who's wrong 80% of the time over someone who's wrong 90% of the time. Even if you believe in a systemic, non-random bias, that doesn't help you unless you have a more reliable source of information.
And this is exactly the inconsistent epistemology that we see all around us: People reject one source of information by some metric they devise for themselves and then accept another source that fails on that metric even more.
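The arithmetic behind this is simple enough to spell out (the rates below are the hypothetical percentages from the comment above, not measurements of anything):

```python
# Hypothetical error rates from the discussion above.
# Distrusting one source only helps if the replacement is wrong less often.

def should_switch(current_wrong_rate, alternative_wrong_rate):
    """Switching sources is rational only if the alternative errs less often."""
    return alternative_wrong_rate < current_wrong_rate

# A source wrong 45% of the time is worth abandoning for one wrong 40%...
assert should_switch(0.45, 0.40)
# ...but not for one wrong 90% of the time (the inconsistent epistemology).
assert not should_switch(0.45, 0.90)
# Even a doctor wrong 60% of the time beats a pundit wrong 90% of the time.
assert should_switch(0.90, 0.60)
```

The test is symmetric: whatever metric you use to reject one source has to be applied to its replacement too, or you end up worse off than when you started.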
Regardless, clearly labeled opinions are standard practice in journalism. They're just not on the front page. If you saw that on the front page, then I'd need more context, because that is not common practice at NYT.
This, too, goes into the probability of something being right or wrong. But the problem I'm pointing out is an inconsistent epistemology. The same kind of test should be applied to any claim, and then they have to be compared. When people trust a random TikToker over the NYT, they're not applying the same test to both sides.
> It’s also why things like calling your opponents dumb, etc is so harmful.
People who don't try to have any remotely consistent mechanism for weighing the likelihood of one claim against a contradicting one are, by my definition, stupid. Whether it's helpful or harmful to call them stupid is a whole other question.
Could you quickly summarize how and why you felt let down by the media in regards to COVID?
That is false. You build a different type of trust: people need to trust that when they buy something from you, it is a good product that will do what they want. Maybe someone else is better, but not enough better to be worth the time they would need to spend evaluating that. Maybe someone else is cheaper, but you are still reasonably priced for the features you offer. They won't get fired for buying you, because you have so often been worthy of the trust they give you that in the rare case you do something wrong, it reads as "nobody is perfect" rather than "you are no longer trustworthy" (you can only pull this off a few times before you do become untrustworthy).
The above is very hard to achieve, and even when you have it, very easy to lose. If you are not yet there for someone, you still need to act like you are and not risk losing it, even though they may never buy from you often enough to realize you are worth it.
Evil contains within itself the seed of its own destruction ;)
Sure, sometimes you should fight the decline. But sometimes... just shrug and let it happen. Let's just take the safety labels off some things and let problems solve themselves. Let everybody run around and do AI and SEO. Good ideas will prevail eventually, focus on those. We have no influence on the "when", but it's a matter of having stamina and hanging in there, I guess
The people yearn for the casino. Gambling economy NOW! Vote kitku for president :)
PS. Please don't look at the stock market.
Wired: "Build things society needs"
LOL
A society like that may be quite different in innumerable ways, of course, and the idea of “wealth” in the way we understand it may not make sense.
Wealth is created by work. In any society, be it growth or no-growth, you can create and acquire wealth by working. (Not necessarily for a wage. Working for yourself also creates wealth. Every time you make yourself dinner, or patch a torn pant leg, or change your car's oil, you are creating wealth.)
The problem is that non-working parasites (investors, rent-seekers, warlords) can't acquire wealth in a no-growth society without taking it from someone else.[1] (Because in a no-growth society, investing on the net is ~zero-returns, ~zero-value.)
------
[1] They take it from someone else in a growth society, too, but a person who works and loses half their productive surplus to a rent seeker is still getting the benefits of growth. In a no-growth society, the rent-seeker's gain is 100% someone else's net loss.
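Footnote [1] can be put in toy numbers (entirely made up, just to illustrate the asymmetry): suppose a rent-seeker takes half of a worker's yearly surplus.

```python
# Made-up numbers illustrating footnote [1].
# A worker produces a surplus each year; a rent-seeker takes a fixed share.
rent_share = 0.5

def worker_gain(surplus_last_year, surplus_this_year):
    """Change in what the worker keeps, year over year, after rent is extracted."""
    kept_last = surplus_last_year * (1 - rent_share)
    kept_now = surplus_this_year * (1 - rent_share)
    return kept_now - kept_last

# Growth society: surplus grows 10% a year, so even after losing half
# to the rent-seeker, what the worker keeps still rises.
assert worker_gain(100, 110) > 0

# No-growth society: surplus is flat, so the rent-seeker's take is
# exactly offset by the worker's stagnation; the worker never gets ahead.
assert worker_gain(100, 100) == 0
```

Same extraction rate in both cases; only in the growth case does the worker's position improve at all, which is the footnote's point.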
That assumes people have the ability to choose not to do these things, and that they can't be manipulated or coerced into doing them against their will.
If you believe that advertising, especially data-driven personalised and targeted advertising, is essentially a way of hacking someone's mind into doing things it doesn't actually want to do, then it becomes fairly obvious that it's not entirely the individual's fault.
If adverts are 'Buy Acme widgets!' they're relatively easy to resist. When the advert is 'onion2k, as a man in his 40s who writes code and enjoys video games, maybe you spend too much time on HN, and you're a bit overweight, so you should buy Acme widgets!' it calls for people to be constantly vigilant, and that's too much to expect. When people get trapped by an advert that's been designed to push all their buttons, the reasonable position is that the advertiser should take some of the responsibility for that.
I'd love to see the machine version or hear more of your thoughts about what goes into it.
>How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?
I should have quoted what I just did in my original reply, I feel like I wasted your time by not including it. Still you did post interesting things so not all is lost.
>how can crime be bad if it forces us to police crime?
Crime can be bad whether we police it or not. We actually police crime because it's bad, at least in societies inclined to have a police force. A desire to reduce something's occurrence is not speaking positively of that occurrence.
> How can infinite AI content be strictly awful if it forces us to fix issues with our trust and reward systems?
This is neither a disqualifier for being "strictly awful", nor the newly arrived, unique event that finally necessitates fixes to trust and reward systems. I would hope that we don't evaluate the goodness of AI based on whether we have functioning trust and reward systems.
And a lot of the time, that trust is specific to a topic, one which matters to them personally. If they cannot directly verify claims, they can at least observe ways in which their source resonates with personal experience.
Within the last year I opened an Instagram account just so I could get updates from a few small businesses I like. I have almost no experience with social media. This drove home for me just how much the "this is where their attention goes, so that's revealed preference" thing is bullshit.
You know what I want? The ability to get these updates from the handful of accounts I care about without ever seeing Instagram's algo "feed". Actually, even better would be if I could just have an RSS feed. None of that is an option. Do I sometimes pause and read one of the items in the algo feed that I have to see before I can switch over to the "following" tab? I do, of course, they're tuned to make that happen. Does that mean I want them? NO. I would turn them off if I could. My actual fucking preference is to turn them off and never see them again, no matter that they do sometimes succeed in distracting me.
Like, if you fill my house with junk food I'll get fatter from eating more junk food, but that doesn't mean I want junk food. If I did, I'd fill my house with it myself. But that's often the claim with social media, "oh, it's just showing people more of what they actually want, and it turns out that's outrage-bait crap". But that's a fucking lie bolstered by a system that removes people's ability to avoid even being presented with shit while still getting what they want.
Most ads are just manipulating me, but there are times I need the thing advertised if only I knew it was an option.
It's fundamentally exploitation on a population scale, and I believe it's immoral. But because it's also massively lucrative, capitalism allows us to ignore all moral questions and place the blame on the victims, who again, are on the wrong side of a massive power imbalance.
If that resonates further let me know at my un on icloud domain
Your last point helps me tease out what I think rubs me the wrong way. Another analogy, "these newly introduced, extremely fast cars make it entirely unsafe to drive drunk."
Of course to be fair, we'd have to point out that the purchase, operation and production (and more) of said vehicles has a terrible impact.
I'd just love to hear that we are going to crack down on drunk driving which was even a problem when we were going slower. Obviously, the metaphor falls apart - trust and reward are much more interesting nuts to crack.
It's a really hard point to make because expressing an interest in wanting to see one part of the problem solved seems to indicate to others that I don't care about all the other aspects.
Call me naive, but I think education can help.
What authority are you going to complain to to "correct the massive power imbalance"? Other than God or Martians I can't see anything working, and those do not exist.
Corruption generally works by inflicting a diluted, distributed harm. Everyone else ends up a bit worse off except for the agent of corruption, which ends up very well off.
From my experience, there absolutely is. It just isn't legible to you.
The only thing I can come up with is that they do believe rigorous scholarship can arrive at answers, but sometimes those who do have the "real answers", lie to us for nefarious reasons. The problem with that is that this just moves the question elsewhere: how do you decide, in a non-arbitrary way, whether what you're being told is an intentional lie? (Never mind how you explain the mechanism of lying on a massive scale.) For example, an epistemology could say that if you can think of some motivation for a lie then it's probably a lie, except that this, too, is not applied consistently. Why would doctors lie to us more than mechanics or pilots?
Another option could be, "I believe things I'm told by people who care about me." I can understand why someone who cares about me may not want to lie to me, but what is the mechanism by which caring about someone makes you know the truth? I'm sure that everyone has had the personal experience of caring about someone else and still advising them incorrectly, so this, too, quickly runs into contradictions.
First, show me a person who believes all of them.
Then, try asking that person.
You are trying to ask me to justify entire worldviews. That is far beyond the scope of a single HN post, and also blatantly off topic.
And I did ask such people such questions - for example, people who fly a lot yet believe "chemtrails" are poisoning us - but their answers always ended up with some arbitrary choice that isn't applied consistently. Pretty much, when forced to choose between claims A and B, they go by which of them they wish to be true, even if they would, in other situations, judge the process of arriving at one of the conclusions to be much stronger than the other. They're more than happy to explain that they trust vitamins because of modern scientific research, which they describe as fraudulent when it comes to vaccines.
Their epistemology is so flagrantly inconsistent that my only conclusion was that they're stupid. I'm not saying that's an innate character trait, and I think this could well be the result of a poor education.
We’re living through the weirdest moment in human history.
For the first time since, well, ever, the cost of creating content has dropped to essentially zero. Not “cheaper than before”, but like actually free.
It’s so easy to generate a thousand blog posts or ten thousand “””personalized””” emails and it barely costs you anything (for now).
In theory this sounds great: infinite content, giving a voice to those who struggled to be heard before.
So many opportunities.
But if you’re trying to sell, trust is collapsing faster than content is proliferating.
And I don’t mean “trust” in some abstract sense, but the real – who’s real, who’s credible, and what should I be paying attention to?
We’re not moving forward. We’re definitely moving backwards.
I know someone who runs a B2B SaaS company’s sales – he’s very smart. He spent many many years building his network and being a “relationship builder” type of seller – earning trust the old-fashioned way.
When we spoke last Sunday, he told me “I ignore all email outreach, and I barely even pick up the phone if I don’t know who it is”
“I can’t tell if it’s a real person or someone who’s scraped my details. I used to be able to tell. Now I can’t. So I just… don’t engage. Not worth my time.”
To dumb it down for you if you're doing this outbound: he's not asking "do I need the product you're selling?" – he's asking "why should I trust you specifically to deliver it?" – and you're not getting through that filter.
If being in marketing has taught me anything, it's that the "rules" and "playbooks" optimize for the wrong question – we craft positioning that explains what we do and why it matters.
In many cases, your prospect already knows they need what you’re selling.
They don’t need another pitch deck explaining why AI coding assistants increase developer productivity or why they can send more e-mails with an AI SDR.
They’ve seen forty of those this month (can we talk about how many AI SDRs there are????)
Honestly, this hurts to say too, but they don't need proof that outcome-based pricing aligns incentives better than seat licenses. They already read one of my 40 articles on the topic.
What they actually want to know is “why the hell would I buy it from you instead of the other hundred companies spamming my inbox with identical claims?”
And because everything is AI slop now, answering that question became harder for them.

I stole this from PHOS Creative
If you haven’t heard already, here are the differences between a marketing funnel and a trust funnel:
| Aspect | Marketing funnel | Trust funnel |
|---|---|---|
| Main Focus | Lead generation and conversions | Relationship building and customer loyalty |
| End Point | Purchase or conversion | Ongoing customer advocacy and retention |
| Content Strategy | Promotional, sales-focused | Helpful, customer-focused, value-driven |
| Success Metrics | Conversion rates, sales numbers | Satisfaction, repeat business, advocacy |
| Timeline | Short to medium-term | Long-term, compounding trust |
It’s clear that you need to be on the trust side of that table! The pure marketing-funnel game is simply too boring.
When a Claude license is like, $10 a month – content creation costs are effectively zero. Now everyone can afford to look credible. Perfect grammar. Personalized outreach that references your LinkedIn posts (albeit badly).
All of it can be generated in minutes, with no human intervention.
Which means all of it is now suspect.
I now send all e-mail AND LinkedIn messages to the trash.
Why? Because I can actually tell they didn’t come from humans who care about me or my problem.
They’re all “vaguely” personalized, and they’re all “curious” about how I do something. Why? Because “I’m curious about…” is in a playbook that supposedly gets good responses.
This is the trust collapse for me: it’s not that I don’t believe your product works. I don’t believe you’re a real human who will still care after you sign the contract.
God, no. Stop.
That’s not the issue.
Old World (…-2024):
New World (2025-…):
The signal-to-noise ratio has hit a breaking point where the cost of verification exceeds the expected value of engagement.
So prospects don’t verify. They just assume everything is noise. This is why I (and my friend) ignore all outreach, and I’m far from alone.
The cognitive cost of determining “is this real?” for 200 messages exceeds the expected benefit of the 2-3 that might actually be valuable.
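The trade-off above is back-of-the-envelope expected-value arithmetic, and it can be sketched directly. All the numbers below are illustrative assumptions, not data from this post:

```python
# Hedged sketch of the "verification isn't worth it" arithmetic:
# screening every message has a cost, and only a handful are genuine.
def worth_verifying(n_messages, n_valuable, value_per_hit, cost_per_check):
    """Return True if the expected benefit of the few genuine messages
    outweighs the total cost of checking every message in the pile."""
    total_cost = n_messages * cost_per_check
    expected_benefit = n_valuable * value_per_hit
    return expected_benefit > total_cost

# 200 messages with ~3 valuable ones: even if a genuine hit is worth
# 50x the cost of one check, screening the whole pile still loses
# (benefit 150 vs. cost 200), so ignoring everything is rational.
print(worth_verifying(200, 3, 50, 1))   # False
```

The point of the sketch is that the rational move flips only when either the hit rate or the value per hit rises, or the per-message verification cost drops, which is exactly what trusted-sender signals are for.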
You have to understand – your prospects aren’t asking:
They’re asking:
And here’s the problem: they can’t tell if you don’t tell them.
I’m not writing this to be cynical, but because I want to make sure you play the right game.
Sure, you have to answer “why should I buy this product category?” but also “why should I buy it from you?” and your content and marketing has to match that.
Your opportunity is huge. Here is what we all must do:
Trust is still a human job: At least for now – AI may help you along the way but at least in 2025 only humans can build genuine emotional connection and credibility. You need that lasting loyalty and advocacy. Don’t rely on the so-called “personalized outbound” alone.
Yes, relevance and customization are critical: AI should help you segment prospects meticulously and flag when human follow-up is likely to deepen trust, not just automate at scale. Use the outbound engine to be real.
Again, keep a human involved: Too much automation can undermine trust. I expect at least some human engagement, especially in complex or high-value sales contexts.
Yes.
You’re representing a brand. And your brand must continuously earn that trust, even as you blend in AI-powered relevance. We still want that unmistakable human leadership.