1. Introduction: <https://news.ycombinator.com/item?id=47689648> (619 comments)
2. Dynamics: <https://news.ycombinator.com/item?id=47693678> (0 comments)
3. Culture: <https://news.ycombinator.com/item?id=47703528>
4. Information Ecology: <https://news.ycombinator.com/item?id=47718502> (106 comments)
5. Annoyances: <https://news.ycombinator.com/item?id=47730981> (171 comments)
6. Psychological Hazards: <https://news.ycombinator.com/item?id=47747936> (0 comments)
And this submission makes:
7. Safety: <https://news.ycombinator.com/item?id=47754379> (89 comments, presently).
There's also a comprehensive PDF version for those who prefer that kind of thing: <https://aphyr.com/data/posts/411/the-future-of-everything-is...> (PDF) 26 pp.
(Derived from aphyr's comment: <https://news.ycombinator.com/item?id=47754834>.)
- You can just use the same tools you use to train them to make them behave in some specific ways if some specific preconditions are met.
- You can also poison the training data, so that the LLMs are writing flawed code they are convinced is right because they saw it on some obscure blog but in fact it had some subtle flaw you planted.
- You can poison the prompts as they are automatically injected from "skills" found online.
You couple that with long running agents which may drift very var from the conditions where they were tested during the safety tests.
You add the fact that in this AI race war, there is some premium to run agents capable of advanced offensive security with full permission, pushed using yolo dark-pattern.
The training process is obscure and expensive so only really doable by big actors non replicable and non verifiable.
And of course, now safe developers (aka those not taking the insane risk of running what really is and should be called malware), can't get jobs, get no visibility for any of their work, drown into a sea of AI slop made using a prompt and a credit card, and therefore they must sell their soul.md and hype for the madness.
Oh ** that.
I have moderated all sorts of crap, and I am grateful that my worst has only been murders, hate speech, NCII, assaults, gore, and other forms of violence.
> I sometimes wish that the engineers working at OpenAI etc. had to see these images too. Perhaps it would make them reflect on the technology they are ushering into the world, and how “alignment” is working out in practice
This is a great idea. I’ve heard of new leaders being dropped in, and being sure they have a better handle on safety than the T&S teams.
Only after they engage with the issues, and have their assumptions challenged by uncaring reality, did they listen to the T&S teams.
There are a lot of assumptions on speech online that do not translate into operational reality.
On HN and Reddit, everyone complains about moderation and janitors, but I highly recommend coders take it as civic service and volunteer.
How can you meaningfully fix a mess, if you do not actually know what the mess is about?
I'm seeing that these tools are extremely powerful the hands of experts that already understand software engineering, security, observability, and system reliability / safety.
And extremely dangerous in the hands of people that don't understand any of this.
Perhaps reality of economics and safety will kick in, and inexperienced people will stop making expensive and dangerous mistakes.
This is true, and I believe that the "sufficient funds" threshold will keep dropping too. It's a relief more than a concern, because I don't trust that big models from American or Chinese labs will always be aligned with what I need. There are probably a lot of people in the world whose interests are not especially aligned with the interests of the current AI research leaders.
"Don't turn the visible universe into paperclips" is a practically universal "good alignment" but the models we have can't do that anyhow. The actual refusal-guards that frontier models come with are a lot more culturally/historically contingent and less universal. Lumping them all under "safety" presupposes the outcome of a debate that has been philosophically unresolved forever. If we get hundreds of strong models from different groups all over the world, I think that it will improve the net utility of AI and disarm the possibility of one lab or a small cartel using it to control the rest of us.
LLMs can hack, but also nmap made hacking easier do we make nmap illegal? We already have drones who kills people, now there is less human involvement, results are same. LLM can also make defending easier (at least for cyber security) but I guess real world security is not that different. Now evil things can be done faster, easier and at more scale. Also good things have the properties.
It’s another tool in the toolbox, the idea that some entity will able to censor or align it as naive as thinking internet can be controlled. Some will do and manage anyway, but it’s not any different china’s firewall.
Alignment is sold to us by companies like OpenAI and Anthropic , not because they care, because that gives them power and more control. When was the last time a big corporation actually cared about soft topics like this? Yes, never.
Geoffrey Hinton will not have his liver pecked out every day like Prometheus does.
What is unacceptable, and what I've used my entire life as a deliberate strategy to obfuscate personal affairs, deflect unpleasant conversations, and deal with fools I come across, is to mix of a small amount of truth within a complex web of lies and misdirection.
This approach deals with two main challenges of lying effectively: lying in a consistent way and resisting the urge to be caught out in the lie. The truth is an abyss, and it frequently finds its most trenchant opponents flinging themselves willingly into it.
The most important, revealing truths can be disclosed without any risk of being discovered, hiding in plain sight. The philosophers knew this and applied these lessons judiciously since the times of Plato. Sometimes speaking the truth is dangerous.
I sometimes wish LLMs displayed that cautious refrain when discussing difficult matters. In my estimation, AGI will not have been reached until the models can produce works as mischievous as Plato, Averroes, Rousseau, or Derrida.
We are a long way from that. The vanilla brand of lies put out today by LLMs are barely worth mentioning, even if troublesome.
It's when the lies mask a deeper and profound truth that we'll know the game is up.
I think the author is brushing against some larger system issues that are already in motion, and that the way AI is being rolled out are exacerbating, as opposed to a root cause of.
There's a felony fraudster running the executive branch of the US, and it takes a lot of political resources to get someone elected president.
In what world would I ever expect a commercial (or governmental) entity to have precise alignment with me personally, or even with my own business? I argue those relationships are necessarily adversarial, and trusting anyone else to align their "AI" tool to my goals, needs, and/or desires is a recipe for having my livelihood completely reassigned into someone else's wallet.
How did brains acquire this predisposition if there is nothing intrinsic in the mathematics or hardware? The answer is "through evolution" which is just an alternative optimization procedure.
The internet produced 4chan. Produced scammers. Produced fraud. Instrumental in spreading child porn. Caused suicides. Many people lost their lives due to bullying on the internet. Many develop have addictions to gaming.
To anyone who has given it some thought, any sufficiently advanced technology usually affects both in good and bad ways. Its obvious that something that increases degrees of freedom in one direction will do so in others. Humans come in and align it.
There's some social credit to gain by being cynical and by signalling this cynicism. In the current social dynamics - being cynical gives you an edge and makes you look savvy. The optimistic appear naive but the pessimists appear as if they truly understand the situation. But the optimists are usually correct in hindsight.
We know how the internet turned out despite pessimists flagging potential problems with it. I know how AI will turn out. These kind of articles will be a dime a dozen and we will look at it the same way as we look at now at bygone internet-pessimists.
This is response not just to this article, but a few others.
Virtually all of the arguments here could also be applied against the Internet itself.
I do think that safety is important. I'm particularly concerned about vulnerable people and sycophantic behavior. But I think it's better not to be a luddite. I will give a positively biased view because the article already presents a strongly negative stance. Two remarks:
> Alignment is a Joke
True, but for a different reason. Modern LLMs clearly don't have a strong sense of direction or intrinsic goals. That's perfect for what we need to do with them! But when a group of people aligns one to their own interest, they may imprint a stance which other groups may not like (which this article confusingly calls "unaligned model", even though it's perfectly aligned with its creators' intent). People unaligned with your values have always existed and will always exist. This is just another tool they can use. If they're truly against you, they'll develop it whether you want it or not. I guess I'm in the camp of people that have decided that those harmful capabilities are inevitable, as the article directly addresses.
> LLMs change the cost balance for malicious attackers, enabling new scales of sophisticated, targeted security attacks, fraud, and harassment. Models can produce text and imagery that is difficult for humans to bear; I expect an increased burden to fall on moderators.
What about the new scales of sophisticated defenses that they will enable? And for a simple solution to avoid the produced text and imagery: don't go online so much? We already all sort of agree that social media is bad for society. If we make it completely unusable, I think we will all have to gain for it. If digital stops having any value, perhaps we'll finally go back to valuing local communities and offline hobbies for children. What if this is our wakeup call?
1. AI becomes a highly protected technology, a totalitarian world government retains a monopoly on its powers and enforces use, and offers it to those with preexisting connections: permanent underclass outcome
2. Somehow the world agrees to stop building AI and keep tech in many fields at a permanent pre-2026 level: soft butlerian jihad
3. Futurama: somehow we get ASI and a magical balance of weirdness and dance of continual disruption keeps apocalypse in check and we accept a constant steady-state transformation without paperclipocalypse
Such a fear mongering position. You can learn to build pipe bombs already. Take any chemical reaction that produces gas and heat and contain it. Congratulations, you have a pipe bomb.
Meanwhile.. just.. ask an LLM if you can mix certain cleaning chemicals safely.
> I see four moats that could prevent this from happening.
Really? Because you just said:
> human brains, which are biologically predisposed to acquire prosocial behavior
You think you're going to constrain _human_ behavior by twiddling with the language models? This is foolishly naive to an extreme.
If you put basic and well understood human considerations before corporate ones then reality is far easier to predict.
Most countries have a pretty strong ban on most kinds of weapons, the US is one of the few that lets everyone run around with their rooty tooty point and shooty, but most countries have implemented bans. Some because the government doesn't want the people having them, and in others the citizens call for the bans because they don't like the idea of getting shot by their fellow citizens.
It won't be long before citizens and governments get tired of models being used for criminal activities and will eventually lay down laws around this. Models will have to be registered and safety tested, strict criminal prosecution will happen if you don't. And the big model companies will back their favorite politicians to ensure this will happen to.
Now, that in general will be helpful as there will still be more models, but it will still not be a free for all.
Putting aside malicious actors, the analogy here means benevolent actors could spend more time and money training AI models to behave pro-socially than than evolutionary pressures put on humanity. After all, they control the that optimization procedure! So we shouldn’t be able to point to examples of frontier models engaging in malicious behavior, right?
while under the umbrella of evolution, if you really want to boil it down to an optimization procedure then at the very least you need to accurately model human emotion, which is wildly inconsistent, and our selection bias for mating. If you can do that, then you might as well go take-over the online dating market
No matter what, common people are quickly losing agency in that discussion.
Good things do not all have the same properties - That’s mistaking an incomplete assertion for a complete one.
Cyber security is an attackers domain. Your security is typically because you are (were) not valuable enough to earn the attention of an attacker.
When LLMs make targeting you cost effective, you will have to spend more energy defending yourself. This means that you have less time to do other useful things, reducing your net utility, while increasing attackers utility.
Also - teams in these companies DO care, I have worked with them. The decision makers are regulated by the cadence of the quarterly share holders meeting. At that point things like safety are a cost center. Reducing safety spend while minimizing reduced time on site is rewarded by markets.
I guess I'm trying to wonder why this line of thinking (in theory) doesn't turn to paranoia about everybody. I don't know much ethics or political theory or anything.
This "just" is... not-incorrect, but also not really actionable/relevant.
1. LLMs aren't a fully genetic algorithm exploring the space of all possible "neuron" architectures. The "social" capabilities we want may not be possible to acquire through the weight-based stuff going on now.
2. In biological life, a big part of that is detecting "thing like me", for finding a mate, kin-selection, etc. We do not want our LLM-driven systems to discriminate against actual humans in favor of similar systems. (In practice, this problem already exists.)
3. The humans involved making/selling them will never spend the necessary money to do it.
4. Even with investment, the number of iterations and years involved to get the same "optimization" result may be excessive.
Large language models are not evolving in nature under natural selection. They are evolving under unnatural selection and not optimizing for human survival.
They are also not human.
Tigers, hippos and SARS-CoV-2 also developed ”through evolution”. That does not make them safe to work around.
I would go out a limb and say that current a could create a paperclip problem, given powerful enough tools.
I think the point was never to bring a solution or show any essence of reality. The point was being polemical and signalling savviness through cynicism.
A sludge of spyware and addiction machines which employ negative emotion and outrage to drive shareholder value?
"The internet" is a pretty big tent. Everything from text messages to streaming video to online gaming to social media to encyclopedias. I think 15 years ago you could make a strong case that the internet was mostly a net positive, I think now that is much more difficult. If governments are able to fully realise their plans for surveillance and control, it will almost certainly become a net negative. Of course with many positive aspects.
So likewise with AI, we should be careful to not make the same mistakes as we did with the internet so we can realise something that is mostly positive. We could absolutely have a world where AI is as beneficial as you believe it will be, but we don't get there through inaction, we get there by being deeply critical of the negative aspects of AI and ensuring that we don't let a small number of hyper scalers control our access to it.
If the AI tech keeps going at the direction it's going now, more and more people will start believing the world would be better if the internet and computer had never been invented.
You talk like the internet being a net positive is a given. It really isn't, especially after it's proven that it doesn't democratize power (see Arab Spring, and China, and the US, and everywhere.)
1. Introduction: 33,088 (https://news.ycombinator.com/item?id=47689648)
2. Dynamics: 3,659 (https://news.ycombinator.com/item?id=47693678)
3. Culture: 5,914 (https://news.ycombinator.com/item?id=47703528)
4. Information Ecology: 777 (https://news.ycombinator.com/item?id=47718502)
5. Annoyances: 7,020 (https://news.ycombinator.com/item?id=47730981)
6. Psychological Hazards: 199 (https://news.ycombinator.com/item?id=47747936)
Feedback from early readers was that the work was too large to digest in a single reading, so I split it up into a series of posts. I'm not entirely sure this was the right call; the sections I thought were the most interesting seem to have gotten much less attention than the introductory preliminaries.
It does. People drive these entities. People hide behind the liability shields and authority of these entities. Also notice that I generalized with the phrase “…and trusting anyone…”
I'm not an expert in political theory or ethics either, but in my worldview, power relationships matter in these discussions. I believe power and responsibility should go hand in hand, and I hold entities to a standard that is proportional to their power to influence others lives.
If an entity's power is decentralized, for example when it is democratically organized to some degree, then that disperses both power and responsibility.
Because they want to separate me from as much of my money as they can, and I want to keep as much of my money as I can.
Debatable whether it truly understands what it's doing or not, but the argument usually assumes that it does know what it's doing at least in that it's able to imagine outcomes and create plans to reach its singular goal, making it a very simple toy example of a misaligned system.
I think what I meant to say was, they're as simple to jailbreak as they were three years ago.
Different methods, still simple. Working with researchers that are able to get very explicit things out of them. Again, it feels much worse than before, given the capability of these models.
There's basically guardrails encoded into the fine-tuned layers that you can essentially weave through (prompting). These 'guardrails' are where they work hard for benevolent alignment, yet where it falls short (but enables exceptional capability alignment). Again, nothing really different than it was three years ago.
Not OP, but for me, kind family and friends, and various feel-good pieces of fiction and other writing, at least let me envision the possibility of a perfectly kind/dedicated/innocent/naieve individual who is truly on my side 100%. But even that is mostly imagination and fiction... although convincing others of that isn't necessairly an argument worth making.
Commercial entities have a fundamental purpouse of profit. While profit doesn't have to be a zero-sum game - ideally, everyone benefits in a somewhat balanced way - there's some fundamental tension, in that each party's profit is necessairly limited by the other party's.
Government entities have a fundamental purpouse of executing the will of the state, which is rather explicitly not the same thing as the will of you as an individual.
Both commercial and government entities also tend to involve multiple people, which gets statistics working against you - you really gathered that many people who would put your needs above their own, with exactly zero "imposters" - which in this context just means people with a bit of rational self interest?
> I guess I'm trying to wonder why this line of thinking (in theory) doesn't turn to paranoia about everybody. I don't know much ethics or political theory or anything.
Just because you're paranoid, doesn't mean they aren't out to get you. Trust, but verify.
You might not be able to put absolute blind trust in anybody. I certainly can't. However, one can hedge one's bets, and diversify trust. Build social circles of people with good character, good judgement, and calm temperments - and statistics will start working for you. It's unlikely they'll all conspire to betray you simultaniously, especially if you've ensured betrayal costs much and gains little. While petty and jealous people can indeed be irrational enough to betray under such circumstances, it'll be harder for them to create the kind of conspiracy necessary for mass betrayal that might cause significant enough damage to warrant proper paranoia. You might still have to watch out for gaslighters stealing credit (document your work!) and framing people (document your character!) and other such dishonest and manipulative behavior... but if everyone's looking out for the same thing, well, that's just everyone looking out for everyone else! That's a community looking out for each other, and holding everyone honest and accountable. Most find comfort in that, rather than the stress paranoia implies.
Put yourself in a room full of manipulators and schemers, on the other hand, and "parnoia about everyone" might be the only reasonable or rational response!
Ask any poor person in India what their sentiment is with tech - it is usually optimism.
> You talk like the internet being a net positive is a given. It really isn't, especially after it's proven that it doesn't democratize power (see Arab Spring, and China, and the US, and everywhere.)
The world is far more democratic now than before and I attribute it to technology because it reduces information asymmetry.
[1] https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
”The future” is happening because it is allowed in our current legal framework and because investors want to make it happen. It is not ”happening” because it is good or desirable or unavoidable.
I had never heard of you, and this article appeared very biased to me. I found the information ecology piece superior, shame that it went unnoticed; I will try to go through all of them. I admire the breadth of topics you’re covering and appreciate the many sources. They’re clearly written in your own voice and that is great to see, I guess I mostly reacted to not being fully aligned with your view.
I suspect that if you'd not broken up the post into a series of smaller ones, the sorts of folks who are unwilling to read the whole thing as you post it section by section would have fed the entire post to an LLM to "summarize".
(Apart from that, I'm generally suspect of evolution-based arguments because they are often structurally identical to saying “God willed it, so it must true”.)
the cost of the wrong answer to this question is so incredibly high that I hope nobody is sincerely asking an LLM for this information. The things people trust to "machine that gives convincing answers that are correct 90% of the time" continue to shock me
Frontier AI models get smarter every year, humans but humans don't get any smarter year over year. If you don't believe that somehow AI will just suddenly stop getting better (which is as much a faith-based gamble as assuming some rapturous outcome for AI by default), then you'd have to assume that at some point AI will surpass human intelligence in all fields, and the keep going. In that case human minds and overall will will be onconsequential compared to that of AI.
We must be living in completely different worlds. Claude and other agents have completely upended work for me and every single other software engineer I know.
Right, but the article seems to argue that there is some important distinction between natural brains and trained LLMs with respect to "niceness":
>OpenAI has enormous teams of people who spend time talking to LLMs, evaluating what they say, and adjusting weights to make them nice. They also build secondary LLMs which double-check that the core LLM is not telling people how to build pipe bombs. Both of these things are optional and expensive. All it takes to get an unaligned model is for an unscrupulous entity to train one and not do that work—or to do it poorly.
As you point out, nature offers no more of a guarantee here. There is nothing magical about evolution that promises to produce things that are nice to humans. Natural human niceness is a product of the optimization objectives of evolution, just as LLM niceness is a product of the training objectives and data. If the author believes that evolution was able to produce something robustly "nice", there's good reason to believe the same can be achieved by gradient descent.
Uh, what? People have been killing each other over values misalignments since there have been people. We invented civilization in part to protect our farms and granaries from people who disagreed with us on whose grain was in said granaries.
It really isn't. The whole point of the market system is to collectively align people's actions towards a shared target of "Pareto-optimized total welfare". And even then the alignment is approximate and heavily constrained due to a combination of transaction costs (which also account for e.g. externalities) and information asymmetries. But transaction costs and information asymmetries apply to any system of alignment, including non-market ones. The market (augmented with some pre-determined legal assignment of property rights, potentially including quite complex bundles of rules and regulations) is still your best bet.
There was a Japanese visual novel in the 2000s about a girl who was your personal maid, and was so devoted would always take your side in any conflict, accept and support you just the way you are, even if you were a horrid person to your friends. It turns out she was a ghost, or a kind of yokai, or something. Anyhoo, back on 2ch she attracted a fandom, and there was a second group of people on 2ch who labelled her a "useless person manufacturer" because if you actually had a person who always accepted you just the way you are and never pushed back, that can be actually a trap that prevents you from developing.
It's a theme that's relevant today when people have AI servitors that always glaze them. It puts even certain utopian AI fiction, like Richard Stallman's story "Made for You", into a whole new light.
Competition is only cooperation's favorite strategy.
(By choosing from competing groups we select more favorable cooperation partners, because there are too many to choose from.)
Both of our statements are true. darned doublethink.
Also, if you think this is just “LLM is bad”, I highly suggest reading the series first. The social impacts they talked about at the start of the series should resonate with a lot of people here and are exactly the kind of thing which people building systems should talk about. If you’re selling LLMs, you still want to think about how what you’re building will affect the larger society you live in and the ways that could go wrong—even if we posit sociopath/MBA-levels of disregard for impacts on other people, you still want to think about how LLMs change the fraud and security landscape, how the tools you build can be misused, how all of this is likely to lead to regulatory changes.
In 2025, we lost 22931 crores to cyber fraud - about 2.7 billion USD. People are now saying that they are relieved if the losses were only single digit crores lost.
India invented digital house arrests. There’s entire districts/cities where the primary revenue stream is from scams. Cops don’t want to involve themselves with cyber crimes because they can’t resolve them.
India’s information economy is so broken, that the idea that we are less or more democratic is not even relevant.
The amount of revenge porn, non-consensual intimate imagery released per day is heart wrenching.
I REALLY want to agree with you. I too want to talk about the good that tech can do. India cannot afford to talk about the good without dealing with the bad.
The motto of move fast and break things assumes someone else will pick up the pieces. This doesn’t hold true for India - we need to pick up the pieces.
I still remember his takedown of mongodb's claims with the call me maybe post years and years ago filling me with a good bit of awe.
That is fantasy. Information technology has created an unprecedented level of information asymmetry and the gap is widening everyday as the total computing capacity grows.
Before information era, the ruling class was roughly as blind as peasants. Population census took years, and sometimes outright impossible. The opaqueness was two-way. Now it's one way - people in power know everything about the citizens.
For the future, try to avoid prevaricating when you actually have a clear sense of what you want to argue. Instead of convincing me that you've weighed both options and found luddism wanting, you just come off as dishonest. If you think stridently, write stridently.
The kind of political propaganda that leads to the US reelecting a convicted rapist whose selects another rapist to lead the Department of Defense who then renames it to the Department of War and, true to the name, starts unilaterally attacking other countries.
Even the best possible set of "pro-social" stochastic guardrails will backfire when someone twists the LLM's dreaming story-document into a tale of how an underdog protects "their" people through virtuous sabotage and assassination of evil overlords.
Large language models are not under evolutionary pressure and not evolving like we or other animals did.
Of course there is nothing technical in the way preventing humans from creating a ”nice” computer program. Hello world is a testament to that and it’s everywhere, implemented in all the world’s programming languages.
> If the author believes that evolution was able to produce something robustly "nice", there's good reason to believe the same can be achieved by gradient descent.
I don’t see how one means there is any reason, good or not, to believe it is likely to be achieved by gradient descent. But note that the quote you copied says it is likely some entity will train misaligned LLMs, not that it is impossible one aligned model can be produced. It is trivial to show that nice and safe computer programs can be constructed.
The real question is if the optimization game that is capitalism is likely to yield anything like the human kind we just lucked out to get from nature.
Gain is obtained by the easiest means available. Your narrow definition of profit is seldom the easiest, cheating is far "superior" especially when it's legal for some.
> None of this requires taking anything away from any other party.
"required" and "preferred" (e.g. because it's far easier) are different like night and day.
Google trumps the search results with it's LLM box. There's only one reason to do that. They know their audience is not engaging in discretion.
> The things people trust to "machine that gives convincing answers that are correct 90% of the time" continue to shock me
People are having intimate relationships with chat bots. There's a deeper sociological problem here.
One of the ideas I've toyed with, even before all the AI hype, is a dumb, semi-adversarial servitor. Something to nag or taunt me about chores not done, to interrupt me when I'm doomscrolling, to use as a vessel for precommitment, to challenge me in various ways. I've been too lazy to build it thus far. Many tools overlap the problem space, so I shouldn't be using that as an excuse - perhaps I should give StayFocusd another shot.
Conflict and other stressors - in moderation, within the limits of one's ability to handle - are important for growth and health. A tree shielded from wind is weakened as it fails to develop stress wood and structural strength. A good debate can sharpen my thoughts and mind, walking to lunch keeps my cardiovascular system healthy, rising to life's various challenges gives me the security of knowing I can rise to the occasion and gives me more skills.
What you describe is factually not how human society formed.
I’ll ask you this: would India be better off without internet? If your ultimate goal were democracy, would you end internet to promote democracy in India?
(hint: there already exist examples like such)
Without information, there is no way a voter may know which person to vote for and whether to believe in them at all and you are easily susceptible towards manipulation.
It will become more clear when you try to answer this hypothetical: if your objective were to bring in more democracy in North Korea, would you allow the global internet to proliferate if you could? According to your theory, it would just make it worse in general.
I may think stridently (debatable) but I generally believe it is best to always try to meet in the middle if the goal is genuine discussion. This is my attempt at that.
No blaming trump on anything other than the people who voted for him is like blaming school shootings on anything other than guns:a popular American passtime, and complete and utter nonsense.
Whatever the difference between naturalness and a state of nature, it has nothing to do with education or middle-class existence.
I doubt it, but even from an irrational anger perspective, I hate that these idiots can do idiotic (and worse, counter productive) stuff, and get no comeback on themselves.
> The issue with most of these articles is that they seem to demonize the technology, and systematically use demeaning language about all of its facets.
This is very confident, strident language. You clearly believe that there is a faction of people demonizing technology, akin to luddites, who are not worthy of being taken seriously.
> This one raises a lot of important points about LLMs, but...
So here you go for the rhetorical device of weighing the opposing view. Except, you don't weight it at all. You are not at all specific about what those points are. It's just a way to signal that you're being thoughtful without having to actually engage with the opposing viewpoint.
> I do think that safety is important... But I think it's better not to be a luddite.
Again, the rhetoric of moderation but not at all moderate in content.
It was a clear mistake to think that this was LLM writing. But I suspect the reason I made this mistake is that AI writing influences people to mimic surface level aspects of its style. AI writing tends to actually do the "You might say A is true, but B has some valid points, however A is ultimately correct." Your writing seems like that if you aren't reading it closely, but underneath that is a very human self-assuredness with a thin veneer of charitability.
The utilitarian argument for freedom of speech and expression in America finds its roots in the Marketplace of ideas.
Verification is frankly, the task of all our markets - to set up incentives for being right.
With no government interference in the exchange of ideas, citizens would be better able to discuss ideas, including those not popular with the establishment.
Since no one has a monopoly on truth, it would be through this competition, and fair traffic society would be better able to understand truth and thrive.
That worked, when we had newspapers that were funded, where the media landscape was not consolidated, and where we didn’t have an abundance of technology that overwhelmed our ability to verify and be informed.
Today, through entirely private forces, we can monopolize, fracture and shape the traffic in our marketplace of ideas.
Trump is very much the ideal candidate to ride the media environment. The right side of the political spectrum is simply a far more efficient at providing a wrestling style experience for its audience. Its consolidated media environment largely pays lip service to journalistic standards, and sells a coordinated set of ideas for its audience.
The Fox News effect is a case in point, and this was from the 90s.
This media model has been co-opted globally, with every party and government now providing patronage to media houses to keep them afloat, and to build their own narratives.
The citizen who engages in these media markets simply does not enter a vibrant competitive market anymore.
Broad-based alignment doesn't come from nothing, but it is surprisingly easy to achieve when a population recognizes a shared stake. A synthesis between selfishness and altruism emerges when you consider who you can call a "neighbor".
I'd strongly suggest reading his books. They profoundly changed my understanding of how human institutions and society form.
The law isn't going to be repealed because a bunch of nerds geoblocked their personal blog.
Sure. But it takes work for anything larger than a small, close-knit community. I’m pushing back on the notion that this comes naturally and is a default state. It’s not, at least not relative to people naturally forming in and out groups.
The armchair commenters are probably folks who have never organized a group of people before outside a commercial context.
No, it does not, and that's Graeber's whole point.
"Markets" are not some sort of physical law of the universe.
A simple example of this is it's the norm in hunter gatherer societies to take care of people who never will make an equal contribution back in the transactional sense.
Because the social ties in those societies are not simply transactions.
If your model fails to accurately describe empirical reality, time to improve/expand the model.
I like economics and math too, but the whole discussion of markets is a terrible starting place for deriving results in ethics/psychology. If you insist though, notice that unions will happen unless some other organization is working to prevent them. What do you suppose this means? People are aligned with each other exactly because they've noticed their coworkers are not corporations or governments.
Although the two are entangled, politics is a more relevant framing than economics here. If people weren't broadly aligned on basic stuff, then autocrats, theocrats, kleptocrats and so on would simply not be interested in dismantling democracies. They make that effort because they must.
But that shared stakeholding doesn’t naturally drive alignment. You need journalists, fiction writers, organizers and delegates. Travel and curiosity. These each take effort, resources and organization. It’s something we do well. But it isn’t spontaneous in the way small-group kinship is—it literally emerges if you put people in proximity.
Historically, we did essentially the opposite. We figured out many aspects of human ethics and psychology first, and deduced from them how and why markets work as they do.
> ... If people weren't broadly aligned on basic stuff, then autocrats, theocrats, kleptocrats and so on would simply not be interested in dismantling democracies. They make that effort because they must.
This implies that people are only weakly aligned in the first place, otherwise no such attempt at dismantling could ever succeed. That's not a very interesting claim; it does not refute the usefulness of some external mechanism to more directly foster aligned action. Markets do this with a maximum of decentralized power and a minimum of institutional mechanism.
This is not the history, it is a mythology in opposition to the empirical evidence.
Which is why you should read Graeber.
Anyhow, replying is clearly past the point of utility here.