As far as I understand, Bitcoin is fundamentally unusable as a currency. Transactions are expensive and throughput is capped at something like 7 per second. It's also inherently deflationary; you want an inflationary currency, because you want people spending, not hoarding.
The author lost me a little on the AI rant. Yes, everything and everyone is shoving LLMs into places where I don't want them. Just today Bandcamp sent me an email about upcoming summer albums that was clearly in part written by AI. You can't get away from it; it's awful. That being said, the tooling for software development is so powerful that I feel like I'd be crazy not to use it. I save so, so much time on banal programming tasks by just writing up a paragraph to Cursor about what I want and how I want it done.
Yes, that is roughly the takeaway here. LLMs are getting so popular in programming not because they are good at solving problems but because they are good at reproducing a solution to some minor variation of an existing problem that has already been solved many times.
Most of the work that most of the industry does is just re-solving the same set of problems. This is not just because of NIH but also because code reuse is a hard problem.
This is not to say that everything is the same product. The set of problems you solve and how you chain those solutions together (the overarching architecture), as well as the small set of unique problems you solve, are the real value in a product. But it's often not the majority of any single codebase.
Yes. Most developers in the corporate world are building CRUD apps that are 90% boilerplate. Hopefully this helps explain the disconnect.
And even if GenAI progress stops here and it never gets better, it's incredibly useful. Why do people realize that it can't do EVERYTHING and then get stuck on the view that they can't use it for ANYTHING, confused as to why others are getting benefit from it?
Two things can be true (and I believe they are): the hype can be "staggeringly overblown" AND it can still be useful in many cases.
In the past ten years, I worked with one guy from Nigeria and a bunch of people from Iran. Bitcoin (or rather cryptocurrencies in general) has been more than a mere gimmick.
Sending money from one side of the world to the other, for such low fees, even when the central bank of Nigeria is blocking USD transfers, even when bank transfers to Iran are being blocked, has been very useful in itself.
As for AI, yes if you use it as a God, you will be disappointed. Yes it can’t do everything. Yes it will hallucinate.
But it’s been a great learning environment for me, I keep asking questions to get an overview of things.
I use it to learn concepts such as "clean architecture", because AI never gets tired; it can provide endless variations on the same problems until you better understand the underlying principles and the recurring patterns.
When I work on a project, 20-30% is creative, cutting edge, never seen before. 70% is CRUD, necessary boilerplate.
I know what I am supposed to be doing, I can verify the result, double check the validity of it.
Why would I waste hours typing letters?
Do I rely on AI for everything? No I don’t.
But pretending it's completely useless is nonsensical.
Yes it’s just statistically inferring the next token, but it’s actually a very simple, powerful concept.
Like the author, I'm mystified by those who accept the appearance of output as a valid goal.
Even with constrained algorithmic "AI" like Stockfish, which, unlike LLMs, actually works, chess players frown heavily on using it for cheating. No chess player can go to a tournament and say: "I made this game with Stockfish."
I can absolutely relate. That was ten years ago, so I'm not exactly sure where they are now, but they still seem to be going strong.
[0] https://eev.ee/blog/2015/06/09/i-quit-the-tech-industry/
Oh wow.
That seems snarky ("you don't want to learn things") when the reality is that the problem, the finite resource that is the bottleneck, is _time_.
This is also why hyperlinks are underused: few people go through every single one of the words in their last post to try to add appropriate hyperlinks.
one of these things is not like the other
I agree with the author, I usually do not care what exactly the fetch call looks like. Whatever indeed.
This drives me mad in JetBrains' products, mostly because there are about 14 settings you need to touch to completely disable it (including in Rider, which is different from the others). There should be a single toggle for "Disable all 'smart' completions".
Monero is what Bitcoin should have been.
Values at times seem like a property of an extinct subspecies. Like being born with a tail.
But ChatGPT does help me work through some really difficult mathematical equations in the newest research papers by adding intermediate steps. I can easily confirm when it gets them right and when it doesn't, as I do have some idea. It's super useful.
If you are not able to make LLMs work for you at all, and complain about them on the internet, you are an old man yelling at clouds. The blog post devolves from an insightful viewpoint into a long sad ramble.
It’s 100% fine if you don’t want to use them yourself, but complaining to others gets tired quick.
Agree with the sense that we're at a weird moment though.
> But yes, thanks: I was once offered this challenge when faced with a Ren’Py problem, so I grit my teeth and posed my question to some LLM. It confidently listed several related formatting tags that would solve my problem. One teeny tiny issue: those tags did not and had never existed. Just about anything might be plausible! It can just generate Whatever! I cannot stress enough that this is worse than useless to me.
The probabilistic machine generated a probabilistic answer. Unable to figure out a use for the probabilistic machine in two tries, I threw it into the garbage.
Unfortunately, humans are also probabilistic machines. Despite speaking English for nearly a lifetime, errors are constantly produced by my finger-based output streams. So I'm okay talking to the machine that might be wrong in addition to the human that might be wrong.
> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor.
I mean, you were left behind. I was left behind. I am not enjoying being poor. Most of us were left behind. If we invested in Bitcoin like it was the future in 2011 we'd all be surfing around on yachts right now given the current valuation.
Just because you failed to use an LLM effectively the first time, or it doesn't live up to your version of the hype, doesn't mean you have a magic window into how sh1t they are. Many people who use them regularly know just how bad they are and what it's like to move beyond the original context window length. But having autocomplete on steroids is still the best thing with regards to making computing "fun" again in a generation. No more boilerplate, no more beating your head against a desk looking for that minor coding bug that could have been fixed by an awesome regex if only you had the time to learn it. No more having to break your stride and go through about 20 websites with popups to find that someone else already solved the problem that's got you stuck; just getting on with stuff and having fun whilst doing it.
Edit: no retort, just flagging and down-voting... lovely
The irony of this rant next to the AI rant.
Progress is not uniformly distributed I guess.
But I disagree on LLMs being "worse than useless".
Sure, "vibe coding" an entire app from a short prompt will always give you fragile, subtly broken nonsense. *Code is the spec*. In most cases, you can't meaningfully "compress" your requirements into a short informal prompt. We need better formal languages for expressing requirements concisely and declaratively! Think: Prolog, Haskell...
LLMs are good at small tasks that you can review much quicker than doing it yourself. Something tedious, like doing some local refactoring, writing ad-hoc Bash scripts, SQL queries, FFmpeg commands. I use Bash and SQL regularly, but somehow I always have to google the exact syntax. I already use ShellCheck, by the way. It's a must, and it helps a lot when reviewing LLM output.
I like the autocomplete feature too. It often saves time when writing repetitive or obvious code. `if bad_stuff {` usually autocompletes `return Err(BadStuff)` for me. `MyStruct {` initializer usually autocompletes the list of fields for me. I know that incorrect suggestions piss off some people and make it a net-negative for them. Incorrect suggestions are common, but they don't bother me in practice.
UPDATE: I've turned this comment into a blog post. https://home.expurple.me/posts/my-take-on-llms-for-coding/
But for me the biggest issue with all this — that I don't see covered in here, or maybe just a little bit in passing — is what all of this is doing to beginners, and the learning pipeline.
> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.
> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”
When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12 year old kid than the prospect of having to grind for 5 years before being kind of good.
But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.
I clicked halfheartedly, started to read halfheartedly, and got sucked into a read that threw me back into the good old days of the internet.
A pity that the micropayments mentioned in the post never materialized, I'd surely throw a few bucks at the author but the only option is a subscription and I hate those.
I remember when PayPal came to Australia, I was so confused by it as I could just send money via internet banking. Then they tried to lobby the government to make our banking system worse so they could compete, much like Uber.
I have not yet figured out why anyone would choose this behaviour in a text editor. You have to press something to exit the delimited region anyway, whether that be an arrow key or the closing delimiter, so just… why did the first person even invent the idea, which just complicates things and also makes it harder to model the editor’s behaviour mentally? Were they a hunt-and-peck typist or something?
In theory, it helps keep your source valid syntax more of the time, which may help with syntax highlighting (especially of strings) and LSP/similar tooling. But it’s only more of the time: your source will still be invalid frequently, including when it gets things wrong and you have to relocate a delimiter. In practice, I don’t think it’s useful on that ground.
> I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”.
And I'm not the only one saying this but - the bit about LLMs is likely throwing the baby out with the bathwater. Yes the "AI-ification" of everything is horrible and people are shoehorning it into places where it's not useful. But to say that every single LLM interaction is wrong/not useful is just not true (though it might be true if you limit yourself to only freely available models!). Using LLMs effectively is a skill in itself, and not one to be underestimated. Just because you failed to get it to do something it's not well-suited to doesn't mean it can't do anything at all.
Though the conclusion (do things, make things) I do agree with anyway.
> And the only real hope I have here is that someday, maybe, Bitcoin will be a currency, and circulating money around won’t be the exclusive purview of Froot Loops.

Christ
PLEASE NO. The only thing this will lead to is people who didn't get rich with this scheme funding the returns of people who bought in early.
Whatever BTC becomes, everyone who advocates for funneling public money of people who actually work for their salary into Bitcoin is a fraud.
I don't think the blog author actually wants this, but vaguely calling for Bitcoin to become "real money" indirectly will contribute to this bailout.
And yes, I'm well aware that funneling pension funds' money etc. into this pyramid scheme is already underway. Any politician or bank who supports this should be sued, if you ask me.
You're a platform drone, you have no mind, yada. Yet, we are reading the author's blog.
The author may hate LLMs, but they will lead to many people realizing things they were never aware of, like the author's superficial ability to take information and present it in a way that engages others. Soon that will be common knowledge. Not many will make money sharing information in prose.
What the author refers to as "LLMs" today, will continually improve and "get better" at everything the author has issues with, maybe in novel ways we can't think of at the moment.
Alternative take:
"Popular culture" has always been a "lesser" ideal of experience, and now that ontological grouping now includes the Internet, as a whole. There are no safe corners, everything you experience on the Internet, if someone shared it with you, is now "Popular culture".
Everyone knows what you know, and you are no longer special or have special things to share, because awareness is ubiquitous.
This is good for society in many ways.
For example, with information asymmetry, where assholes made others their food, it will become less common that people are food.
Things like ad-driven social networks will fade away as this realization becomes normalized.
Unfortunately, we are at the very early stages of this, and it takes a very long time for people to become aware of things like hoaxes.
You said: how many letters are in the Lithuanian word "nebeprisikaspinaudamas"? Just give me one number.
ChatGPT said: 23

You said: how many letters are in the Lithuanian word 'nebeprisikaspinaudamas'. Just give me one number.
ChatGPT said: 21

Both are incorrect, by the way. It's 22.
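For what it's worth, a quick check in a Python REPL confirms the count:

    >>> len("nebeprisikaspinaudamas")
    22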
The point of doing things is the act of doing them, not the result. And if we make the result easily obtainable by using an LLM then this gets reinforced not destroyed.
I'm going to use sketching as an example, because it's something I enjoy but am very bad at. But you could talk in the same way about playing a musical instrument, writing code, writing anything really, knitting, sports, anything.
I derive inspiration from other people who can sketch really well, and I enjoy and admire their ability. But I'm happy that I will never be that good. The point of sketching (for me) is not to produce a fantastic drawing. The point is threefold: firstly to really look at the world, and secondly to practice a difficult skill, and thirdly the meditative time of being fully absorbed in a creative act.
I like the fact that LLMs remove the false idea that the point of this is to produce Art. The LLM can almost certainly produce better Art than I can. Which is great, because the point of sketching, for me, is the process, not the result, and having the result be almost completely useless helps make that point. It also helps that I'm really bad at sketching, so I never want to hang the result on my wall anyway.
I understand that if you're really good at something, and take pride in the result of that, and enjoy the admiration of others at your accomplishments, then this might suck. That's gotta be tough. But if you only ever did it for the results and admiration, then maybe find something that you actually enjoy doing?
-----
This is why I absolutely cannot fucking stand creative work being referred to as "content". "Content" is how you refer to the stuff on a website when you're designing the layout and don't know what actually goes on the page yet. "Content" is how you refer to the collection of odds and ends in your car's trunk. "Content" is what marketers call the stuff that goes around the ads.
"Content"... is Whatever.
-----
People, please don't think of yourself as "content consumers".
Private Equity & Financialization: "Whatever" for business
Flood the Zone & Deadcatting: "Whatever" for politics
It's what I think about when I hear all of the "AI is going to eliminate all the jobs." That's just a convenient cover story for "Tax laws changed so R&D isn't free money anymore, and we need to fire everyone."
When almost every drop of wealth is in the control of a tiny number of people, it's not surprising that the world turns into one big competition for ways to convince those people that you have a way for them to sop up the remaining thimbleful too.
> There are people who use these, apparently. And it just feels so… depressing. There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine. It feels like a vizier who has definitely been spending some time plotting my demise.
I was several minutes into reading, before this paragraph, when the idea hit me that this person hates managing. Because everyone I've met who hates using AI to produce software describes to me problems like the AI not being correct, or lying to them because the model thought that would please them better, and that's my experience with junior engineers as a manager.
And everyone I’ve met who loves AI at some point makes an analogy to it, that compares it to a team of eager juniors who can do a lot of work fast but can’t have their output trusted blindly, and that’s my experience with junior engineers as a manager.
And then anyone who's been trying to get an engineering manager job over the past few months and tracking their application metadata has seen the number of open postings for their requirements go down month after month, unless you drop the manager part and keep all the same criteria but as an IC.
And then I read commentary from megacorps about their layoffs and read between the lines like here[1]
>… a Microsoft spokesperson said in a statement, adding that the company is reducing managerial layers …
I think our general consternation around this is coming from creators being forced into management instead of being able to outsource those tasks to their own managers.
I am not really sure what to do with this insight
[1] https://www.cnn.com/2025/07/02/tech/microsoft-layoffs-9000-e...
The reason behind banning adult materials has to do with Puritanism and with the high rates of refunds on adult websites.
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".
I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.
Then it became hip, and people would hand-roll machine-specific assembly code. Later on, it became too onerous when CPU architecture started to change faster than programmers could churn out code. So we came up with compilers, and people started coding at a higher level of abstraction. No one lamented the lost art of assembly.
Coding is just a means to an end. We’ve always searched for better and easier ways to convince the rocks to do something for us. LLMs will probably let us jump another abstraction level higher.
I too spent hours looking for the right PHP or Perl snippet in the early days to do something. My hard-earned bash-fu is mostly useless now. Am I sad about it? Nah. Writing bash always sucked, who am I kidding. Also, regex. I never learned it properly. It doesn’t appeal to me. So I’m glad these whatever machines are helping me do this grunt work.
There are sides of programming I like, and implementation isn’t one of them. Once upon a time I could care less about the binary streams ticking the CPU. Now I’m excited about the probable prospect of not having to think as much about “higher-level” code and jumping even higher.
To me, programming is more like science than art. Science doesn’t care how much profundity we find in the process. It moves on to the next thing for progress.
I'm a bit annoyed with LLMs for coding, because I care about the craft. But I understand the premise of using them when the end goal is not "tech as a craft" but "tech as a means". But that still requires having some reason to use the tech.
Hell, I feel the "tech as a means to get money" part for people trying to climb up the social ladder.
But for a lot of people who already did get to the top of it?
At some point we gotta ask what the point of SEO-optimizing everything even is.
Like, is the end goal optimizing life out of life?
Why not write a whole app using LLMs? Why not have the LLM do your course work? Why do the course work at all? Why not have the LLM make a birthday card for your partner? Why even get up in the morning? Why not just go off and live in a forest? Why live at all?
What is even the point?
Most of the Internet is crap. Most media is crap. This does not need to stop you (or me) from creating.
Same with AI. I'm notably more autistic (or more aspie, or whatever) than my friend group, and also I much more easily recognize AI text and images as uncanny slop, while my friends are more easily wowed by it. Maybe AI output has the same "superficially impressive but empty inside" quality as the stuff that sociopaths say.
The last line of the article summarizes it perfectly:
> Do things. Make things. And then put them on your website so I can see them.
I subscribe fully to the first two sentences, but the last one is bullshit. The gloom in the article is born from the author attaching the value of "making things" to the recognition received for the effort. Put your stuff out there if you think it is of value to someone else. If it is, cool, and if it's not, well, who cares.
> I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?” This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?
This gem implies that the value of the music (or art in general) is partially or even wholly dependent on whether or not someone else thinks it's good. I can't even...
If you eliminate the back-patting requirements, and the stuff we make is genuine, then its value is intrinsic. The "Whatever" machines are just tools, like the rest of the tools we use, to make things. So, just make your things and get on with it.
Tech never ever prevents people who really want to hone their skills from doing so. The 100m sprint world record kept improving even after the car was invented. The world record for memorized digits of pi kept improving even though a computer does that incomparably better.
It's ridiculous to think drawing will become a lost art because of LLMs/diffusion models when we live in a reality where powerlifting is a thing.
hmmm
1. Computers could be more fun if you could buy things more easily. Crypto could solve that, but the people involved don't care about crypto, they care about Whatever.
2. Computers can be fun when people do cool things on them. The web could make that possible, but unfortunately the economics of ads incentivize people to post Whatever.
3. Programming can be fun, but LLMs just generate Whatever.
Fair point; I think this feeling is exacerbated by all the social media being full of people looking like they're good at what they do already, but it rarely shows the years of work they put in beforehand. But that's not new, compare with athletes, famous people, fictional characters, etc. There's just more of it and it's on a constant feed.
It does feel like people will just stop trying though. And when there's a shortcut in the form of an LLM, that's easy. I've used ChatGPT to write silly stories or poems a few times; I look at it and think "you know, if I were to sit down with it proper I could've written that myself". But that'd be a time and effort investment, and for a quick gag that will be pushed down the Discord chat within a few minutes anyway, it's not worth it.
Before that, I had a TI-99/4A at home without a tape drive and the family TV as a display. I mainly was into creating games for my friends. I did all my programming on paper, as the "screen time" needed to be maximized for actually playing the games after typing them in from the paper notebook. Believe it or not, but bugs were very rare.
Much later at uni there were computer rooms with Macs with a floppy drive. You could actually just program at the keyboard, and the IDE even had a debugger!
I remember observing my fellow students endlessly type-run-bug-repeat until it "worked" and thinking "these guys never learned to reason through their program before running it. This is just trial and error. Beginners should start on paper".
Fortunately I immediately caught myself and thought, no, this is genuine progress. Those that "abuse" it would more than likely not have programmed 'ye old way' anyways, and some others will genuinely become very good regardless.
A second thing: in the early home computer year(s) you had to program. The computer just booted into the (most often BASIC) prompt, and there was no network or packaged software. So anyone that got a computer programmed.
Pretty soon, with systems like the Vic-20, C64 and ZX Spectrum there was a huge market in off the shelf game cassettes. These systems became hugely popular because they allowed anyone to play games at home without learning to program. So only those that liked programming did. Did that lose beginner programmers? Maybe some, for sure.
This should be comparable to how many fewer people in the West today know how to work a farm or build machinery. Each technological shift comes at a cost of population competence.
I do have a feeling that this time it could be different. Because this shift has this meta-quality to it. It has never been easier to acquire, at least theoretical, knowledge. But the incentives for learning are shifting in strange directions.
But personally, I don't feel as upset over all this as he does. It seems that all my tech curmudgeonliness over the years is paying off these days, in spades.
Humbly, I suggest that he and many others simply need to disconnect more from The Current Thing or The Popular Thing.
Let's look at what he complains about:
* Bitcoin. Massive hype, never went anywhere. He's totally right. That's why I never used it and barely even read about it. I have no complaints because I don't care. I told myself I'd care if someone ever built something useful with Bitcoin. 10 years later they haven't. I'm going back to bed.
* Windows. Man I'm glad I dodged that bullet and became a Linux user almost 15 years ago. Just do it. Stuff will irk you either way but Linux irks don't make you feel like your dignity as a human being is being violated. Again, he's right that Windows sucks; I just don't have to care, because I walked away.
* Bluesky, Twitter, various dumb things being said on social media. Those bother him too. Fortunately, these products are optional. I haven't logged into my Twitter account for three years. I'll certainly never create a Bluesky one. On some of my devices I straight up block many of these crapo social sites like Reddit etc. in /etc/hosts. I follow the RSS feeds of a few blogs, plus the local timeline of a Mastodon instance. Takes ten minutes and then I go READ BOOKS in my spare time. That's it. He is yet again right, social media sucks, it's the place where you hear about all this dumb stuff like Bitcoin; I just am not reading it.
I'm not trying to toot my own horn here; it's just that when you disconnect from all the trash, you never look back, and the frustrations of people who haven't seem a little silly. You can just turn all of this stuff off. Why don't you? Is it an addiction? Treat it like one if so. I used to spend 6 hours a day on my phone and now it's 1 hour, mainly at lunch, because the rest of the time it's on silent, in a bag, or turned off, just like a meth addict trying to quit shouldn't leave meth lying around.
Listen to Stallman. Listen to Doctorow. These guys are right. They were always right. The free alternatives that respect you exist. Just make the leap and use them.
Maybe it's like a pale imitation of structural editing? There are editing modes for some editors that more or less ensure the syntax is always valid as you edit, and they of course include this feature.
In the same vein, I've actually worked on crypto projects in both DeFi and NFT spaces, and agree with the "money for criminals" joke assessment of crypto, even if the technology is quite fascinating.
And thank fuck it happened. All of the shell and obscure Unix tools that require brains molded in the 80s to use on a day-to-day basis should have been superseded by something user-friendly a long time ago.
We live in a world now where people scare one another into making significant choices with limited information. Person A claims it's the future you don't want to miss, Person B takes that at face value and starts figuring out how to get in on the scam, and Person C looks at A and B and says "me too." Rinse and repeat.
That's why so much of the AI world is just the same app with a different name. I'd imagine a high percentage of the people involved in these projects don't really care about what they're working on, just that it promises to score them more money and influence (or so they think).
So in a way, for the majority, it is just stupid and greedy behavior, but perhaps less conscious.
I have a feeling that line of thinking is going to be of diminishing consolation as the world veers further into systemic and environmental collapse.
But yeah, first we'll go through a few (?) years of the self-defeating "ChatGPT does my homework" and the necessary adjustments of how schools/unis function.
I had an interesting discussion with a piano teacher once. Some of his students, he told me, would play for themselves but never for any kind of audience. As the saying goes: if a musician plays a piano in a closed room with no one to hear it, does it make a sound?
Obviously there's nothing wrong with extremely personal art that never gets released to the wider public - not every personal diary should be a blog. But there's also the question of what happens to art when none of it gets shared around, and vibrant art communities are, in my opinion (and I think also the author's), something to encourage.
AI in its current state is, in my workflow, a decent search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it acts as if its code were always 100% accurate and will "fake" APIs).
Can't speak to other editors though.. I don't want to sound like I'm trolling, but they generally feel quite clunky, compared to Emacs (ducks, runs ;p )
Great protocols are built in layers.
You have decentralized instant settlement with average fees around 0.005%, even for micropayments, with the Lightning Network (another protocol built on top of Bitcoin). That's orders of magnitude better than the current payment networks in settlement time and resilience.
Immutable distributed ledgers, by contrast, have found no use cases other than crime and financial speculation in coming up on twenty years. Exactly how long do we have to wait for these interesting uses that are “surely” coming?
In terms of the artist being accessible to overseas fans it's a great improvement, but I do wonder if I had grown up with this, would I have had any motivation to learn?
Edit: I apologize, the author has pre-GPT posts that use em dashes, so it's likely part of their writing style.
It's killing the accumulative and progressive way of learning that rewards those who try and fail many times before getting it right.
The "learning" is effectively starting to be killed.
I just wonder what would happen to a person after many years using "AI" and suddenly not having access to it. My guess is that you become useless and with a highly diminished capacity to perform even the most basic things by yourself.
This is one of many reasons why I'm so against all the hype that's going on in the "AI" space.
I keep doing things the old school way because I fully comprehend the value of reading real books, trying, failing and repeating the process again and again. There's no other way to truly learn anything.
Does this generation understand the value of it? Will the next one?
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
With the WWW we thought everyone having access to all information would enlighten them, but without knowledge people do not recognize the right information, and are more likely to trust (mis)information that they think they understand.
What if LLMs give us all the answers that we need to solve all problems, but we are too uninformed and unskilled to recognize these answers? People will turn away from AI, and return to information that they can understand and trust, even if it's false.
Anyway, nothing new actually, we've seen this with science for some time now. It's too advanced for most people to understand and validate, so people distrust it and turn to other sources of information.
This is not rhetorical - if you (try to) make the life of literally millions of people worse in a small way, the amount of time and money wasted by them should be calculated and used to determine the punishment.
How is it ever wrong, though? If I insert a (, and then a {, and the editor appends so that it's ({}), that's always correct, isn't it? Can it ever not be?
Maybe it's because { is a bit awkward on a Norwegian keyboard, but I like it. Then even if we're 5 levels deep with useEffect(() => {(({[{[ I can just press ctrl+shift+enter and it magically finishes everything up and puts my caret at the correct place, instead of me trying to write ]}]})) in the correct order.
But I agree that in normal input it is often annoying.
This feature is useful for me. So are LLMs. If someone doesn't want to use this or that, they are not obliged to. But don't tell me that features that I find useful "suck".
No one is becoming some omniscient savant by using LLMs, and anyone saying they are is lying and pushing a narrative.
Humans still correct things, humans understand systems have flaws, and they can utilize them and correct them.
This is like saying someone used Word's grammar correction feature and accepted all the corrections. It doesn't make sense, and the people pushing the narrative are disingenuous.
The skill has not been obliterated. We still need to fix the slop written by the LLMs, but it is not that bad.
Some people copy and paste snippets of code without knowing what it does, and in a sense, they spread technical debt around.
LLMs lower the technical debt spread by the clueless, to a lower baseline.
The issue I see is that the amount of code having this level of technical debt is created at a much faster speed now.
I think it is a defense mechanism; you see it everywhere, and you have to wonder, "why are people thinking this way?".
I think those with an ethical or related argument deserve to be heard, but outside of that, it seems like full blinders, ignoring the reality presented before us.
For art/craft you are completely correct though.
Yet, the payment processors will all reliably treat anything NSFW equally by suppressing it as much as they can. From banning individuals who dare do transactions they don't approve of to directly pressuring websites that might tolerate NSFW content by threatening to take away their only means of making money. If they only cared about refunds and profitability, they wouldn't ban individual artists - because the fact how these artists often manage to stay undetected for years suggests that many of their customers aren't the kind to start complaining.
It's quite fascinating how this is the one area where the companies are willing to "self-regulate". They don't process sales of illicit drugs because the governments above them said no and put in extensive guardrails to make these illegal uses as difficult as reasonably possible. Yet, despite most first-world governments not taking issue with adult content at large (for now), the payment processors will act on their own and diligently turn away any potential revenue they could be collecting.
Being good at coming up with ideas, at critically reading something, at synthesizing research, at writing and editing, are all things that take years to learn. This is not the same as learning the mechanics that a calculator does for you.
Well you sure showed them.
TFA makes a very concrete point about how the Whatever machine is categorically different from a calculator or a handsaw. A calculator doesn't sometimes hallucinate a wrong result. A saw doesn't sometimes cut wavy lines instead of straight lines. They are predictable and learnable tools. I don't see anyone addressing this criticism, only straw-manning.
Obesity rates have kept "improving" since the car was invented, to the point of becoming a major public health crisis and the main amplifier of complications and mortality when the pandemic struck.
Oh, and the 100m sprint world record has stood for more than a decade and a half now, which means either we reached the human optimum, or progress in anti-doping technology has forced a regression in performance.
There are clear differences. First off, a calculator and a microwave are quite different from each other, but so is an LLM. Both are time savers, in the sense that a microwave saves time defrosting and a calculator saves time calculating versus a human.
They save time to achieve a goal. However, calculators come with a penalty: by making multiplication easier, they make the user worse at it.
LLMs are like calculators but worse. Both are effort savers, and thus come with a huge learning penalty, and LLMs are imprecise enough that you need to learn to know better than them.
A very good example! (...although probably not how you think it is ;)
Indeed the world record is achieved by a very limited number of people under stringent conditions.
Meanwhile people by and large† take their cars to go to the bakery which on foot would be 10 minutes away, to disastrous effect on their health.
And by "cars" I mean "technology", which, while a fantastic enabler of things impossible before, has turned more people into couch potatoes than athletes.
† Comparatively to world record holders.
And also, how is personalized bullshit better than generic bullshit? We'd need to solve the bullshit problem in the first place, which is mathematically guaranteed NOT to be possible with these types of architectures.
I get what you're after, but that's not a very good example. If a musician is playing an instrument, then of course the musician hears it.
Now, imagine instead that it's a player piano, and the lone "musician" is not actually playing anything at all, but hears the sound of the tones he/she had randomly generated by a "Whatever" machine, resonating through the actual struck strings, and resonant body of a piano, and the hair on the back of their neck stands on end. Then the music ends, the vibrations stop, and all that is left of the moment is whatever memory the "musician" retains.
Was that music while being heard by the "musician"? Is it music when it's just a melody in the "musician's" head? What if it wasn't a piano at all, but just birds singing? Is it still music? If it is, is it "good" music?
Yes, the world is changing fast, and no, we humans don't seem to handle it well. I agree with the article in that sense. But I see no use in categorizing technology as dystopian, just because it's been misused. You don't have to misuse it yourself, or even use it at all if you don't want to. Complaining about it though... we humans are great at that.
It's just matching, and reflecting the way different humans think, and reason, that's all.
(yes, said in jest)
Most likely yet another flawed output from human-LLM (4chan) so online schizos have something to identify themselves with.
A third of the world is unbanked. A permissionless monetary system makes a huge difference for those.
When I was still very skeptical about Bitcoin, I met a guy in Turkey who was from a very poor African country and was just studying there. His father would buy Bitcoin in their home country with the local currency (P2P) and send it to his son, who would then convert it, also P2P, to Turkish lira. They could do this securely and within minutes. The alternative was using Western Union and paying taxes in both countries, which in total added up to ~50% of the sent amount.
It's great not needing Bitcoin, as it is great not needing Tor. But that doesn't mean there's no use case for them.
Color me naive, but honestly I'm pretty sure it's not. We all tend to be worse at distinguishing AI from human text than we think, but the text sounds genuine to me and suggests an author with a quirky personality that seems difficult for an LLM to imitate. And that could include using em dashes.
https://www.antipope.org/charlie/blog-static/fiction/acceler...
Also to the thief that tries to wear it.
Set a few years from now.
I'd really like to know why tubgirl spam raids are bad but unicorn weiners are not; I don't really see much difference between the two, TBH.
- there are zero digs of any kind about sexuality in this piece
- the only reference in the text I can possibly find that someone might have considered a "micro-dig at people who are white" is
> This is the driving force behind clickbait, behind thumbnails of white guys making 8O faces, behind red arrows, behind video essayists who just read Wikipedia at you three times a week like clockwork, [...]
To me this feels more like it's identifying a specific thing than a "micro-dig", but opinions may differ.
Whenever you edit something existing that already has the ), ] or } further down, you end up with ()), []] or {}}. Or when you select some text that you want to replace and start with a quote, only to end up with "the text you wanted to replace" instead of the expected ".
I never notice when it works but get annoyed every time it doesn't, so I feel like it never works and always sucks.
I guess it's muscle memory and some people are used to it, but it feels fundamentally wrong to me to have the editor do different basic editing things based on which character is being pressed.
I think here you are talking about a different thing -- completion of already started parentheses/"/whatever with content in-between, not the pre-application of paired braces or quotation marks, as the author did, no?
e.g.:
(a + b > c) -> ((a + b > c) -> (()a + b > c) -> no, I was aiming for ((a + b) > c)
(it sound like you're talking about a different feature/implementation, though, since in the annoying case there's no 'completion' shortcut, it just appears)
You literally enter an IBAN and the transfer will appear in the other account the next day. And if you need the money in the target account immediately (within 10 seconds) you can do it, too, by checking a checkbox for a small fee and that fee will drop to ZERO across the EU in October 2025.
If you have to use multiple keyboards, the arrows, End, Home etc. tend to be at different positions on each keyboard. Hardly better than using a mouse.
That's where old-school vi / Emacs shine. Ctrl? Always in the same area, so Ctrl-F to go forward is the same gesture whatever brand of laptop I have to work on.
You can perhaps imagine an editor that only inserts the delimiter if you type the start-string symbol in the middle of a line.
You can always insert the second " as a ghost(?) character to keep syntax highlighting working. But it's not like any modern language server really struggles with this anyways.
He wants normal banking and money transfer... but just to anybody, and for any reason. As an example, he'd like people to be able to pay him to draw bespoke furry porn for them. Or as another example, why can't a US citizen pay an Iranian citizen to do some work for them? (e.g. write a computer program)
That is totally possible. The only thing that stands in his way, and drives him into the arms of the cryptocurrency frauds, are moralising and realpolitiking governments that intentionally use their control of banks to control what bank customers can do with their money.
In an ideal world, government would only regulate banks on fiscal propriety and fair-dealing, and would not get in the way of consenting adults exchanging money for goods and services. But because government does fuck with banks, and sometimes the banks just do the fuckery anyway and government doesn't compel them to offer services to all (e.g. Visa/Mastercard refuse to allow porn merchants?), normal people start listening to the libertarians, the sovereign citizens, and the pump-and-dump fraudsters hyping cryptocurrencies.
He wants decentralised digital cash. How can it be done, if not Bitcoin et al?
If you were to create a decentralized and limited supply currency, how would you distribute it so that it's “fair”?
Sounds a bit as if the world were running only on proprietary software created by Microsoft and you criticized the move to open source because it would enrich Linus Torvalds and other code creators/early adopters.
Are people better off by continuing to use centralized broken software that they have to pay a subscription for (inflation) than if they did a lump sum buy of a GNU/Linux distro copy from a random guy and become liberated for the rest of their life?
I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:
    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # C# Encoding.Unicode is UTF-16 LE in Python
        lower_name_bytes = name.lower().encode('utf-16-le')
        hashed_id = CityHash64(lower_name_bytes)
        return FPackageId(hashed_id)
I noted that the original used a custom ToLower() implementation:

> This custom ToLower(), does that produce the same result as .lower() in Python?
Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."
I pushed back with:
> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...
Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.
You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.
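(The fix it converges on is small. A rough sketch of such an ASCII-only lowercase in Python, under my own assumption that only 'A'-'Z' get mapped and everything else is left untouched, which may not match the real engine code exactly:)

    # Sketch only, not UE's actual implementation: lowercase ASCII 'A'-'Z',
    # leave every other character as-is, then hash the UTF-16-LE bytes.
    # CityHash64 and FPackageId are the same names as in the snippet above.
    def ascii_lower(name: str) -> str:
        return ''.join(
            chr(ord(c) + 32) if 'A' <= c <= 'Z' else c
            for c in name
        )

    def from_name(name: str) -> 'FPackageId':
        return FPackageId(CityHash64(ascii_lower(name).encode('utf-16-le')))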
From my experience, this is representative of the typical LLM interaction once one ventures into niche topics like Unreal Engine modding.

But that's not what the marketing says. The marketing says it will do your entire job for you.
In reality, it will save you some typing if you already know what to do.
On HN at least, where most people are startup/hustle culture and experts in something, they don't think long term enough to see the consequences for non experts.
You were not, as is patently obvious from the sentence preceding your quote (emphasis mine):
> Another Bluesky quip I saw earlier today, and the reason I picked up writing this post (which I’d started last week)
The post had already been started, your comment was simply a reason to continue writing it at that point in time. Had your comment not existed, this post would probably still have been finished (though perhaps at a later date).
> It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Despite your restating, your point still reads to me as the opposite of what you claim to have intended. Inventing the table saw is a poor analogy because the problem with the LLM hype has nothing to do with their invention. It's the grifts and the irresponsible shoving of it down everyone's throats that are the problem. That's why the comparison fails: you're juxtaposing things which aren't even slightly related. The invention of a technology and the hype around it are two entirely orthogonal matters.
People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs will eliminate programming as a career. If not in 5 years, give it 10. If not 10, give it 15. Maybe it happens in a day, with a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks, where it takes on more and more responsibilities until one day it's doing everything.
I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:
>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
I mean it's an obvious complete misrepresentation. People are talking about the future. Not the status quo and we ALL know this yet we still make comments like that.
As ever, the standard defence of LLM and all gen AI tech rests on this reduction of complex subjectivity to something close to objectivity: the picture looks like other pictures, therefore it is a good picture. The sentence looks plausibly like other sentences, therefore it is a good sentence. That this argument is so pervasive tells me only that the audience for 'creative work' is already so inundated with depthless trash, that they can no longer tell the difference between painting and powerlifting.
It is not the artists who are primarily at risk here, but the audience for their work. Artists will continue to disappear for the same reason they always have: because their prospective audience does not understand them.
I would not buy a calculator that hallucinated wrong answers part of the time. Or a microwave oven that told you it grilled the chicken but it didn't and you have to die from Salmonella poisoning.
In two years, that won't be the case.
It's the same for virtually every other arts-based job. An economy that currently supports, say, 100% of those people will at most be able to support 10-30% in a few years' time.
> It's ridiculous to think drawing will become a lost art because of LLMs/diffusion models
Map reading is pretty much a dead art now (as someone who leads hikes, I've seen it first hand)
Memorising books/oral history is also a long dead art.
Oral story telling is also a dead art, as is folk music, compared to its peak.
Sure _rich_ people will be able to do all the arts they want. Everyone else won't
No it's not (as OP's article says). With a calculator you punch in 10 + 9 and get 19 immediately, and this was possible 50+ years ago. With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds (this is false, I just tried it and the response is "10 + 9 = 19", but I'm exaggerating for dramatic effect). With a microwave you yeet in food and press a button and stuff happens the same way, every time.
Sure, if you abstract it to "doing things in an easier and lazier way", LLMs are just the next step, like IDEs with built-in error checking and code generation have been for the last 20 years. But it's more vague than pressing a button to do a thing.
IMO, because it's good in one way or another. I'm not reading your writing because I imagine you toiled over every word of it, but simply because I started reading and it seemed worthwhile to read the rest.
LLMs will accelerate the pace of this assimilation. New trends and new things will become popular and generic so fast that we'll have to get really inventive to stay ahead of the curve.
“list them”
“oh I can do that in this other convoluted way that doesn't solve any of these users' goals or problems”
“I'm not the target audience for that so it doesn't count”
“ah so financial speculation, that doesn't count despite being the largest application and sector on the planet”
“marketcap doesn't matter and isn't indicative of anything in that economy, I would rather hold digital assets to a separate standard than every other asset on the planet, out of total ignorance that my same arguments apply to asset classes that I respect”
“see I proved my point for myself, there is no use case after 17 years, classic HN”
“those are strawman arguments, despite all conversations following this same predictable path enough for any language model to regurgitate it verbatim”
Touch grass all by myself?
For a specific example, when 2 grammar points seem to mean the same thing, teachers here in Japan would either not explain the difference, or make a confusing explanation in Japanese.
It's still private-ish/only for myself, but I generated all of this with LLMs and using it to learn (I'm around N4~N3) :
- Grammar: https://practice.cards/grammar
- Stories, with audio (takes a bit to load): https://practice.cards/stories
It's true though that you still need the motivation, but there are 2 sides of AI here and just wanted to give the other side.
I don't know, I feel like one of the worst things to do on the internet is to miscommunicate. And that can be hard enough with 2 native speakers (I just had a response completely blow up on me because they presumably took my comment the wrong way).
>but I do wonder if I had grown up with this, would I have had any motivation to learn?
Machine translation has come a long way where I can at least get the general feel of say, a translated article. It is still far from the point where I feel like I can read a machine translated article and not have it come off as clunky. That last 10% or so is just uncanny enough for it to impact my reading experience.
At any point of progress in history you can look backwards and forwards and the world is different.
Before tractors a man with an ox could plough x field in y time. After tractors he can plough much larger areas. The nature of farming changes. (Fewer people needed to farm more land.)
The car arrives, horses leave. Computers arrive, the typing pool goes away. Typing was a skill, now everyone does it and spell checkers hide imperfections.
So yeah LLMs make "drawing easier". Which means just that. Is that good or bad? Well I can't draw the old fashioned way so for me, good.
Cooking used to be hard. Today cooking is easy, and very accessible. More importantly good food (cooked at home or elsewhere) is accessible to a much higher % of the population. Preparing the evening meal no longer starts with "pluck 2 chickens" and grinding a kilo of dried corn.
So yeah, LLMs are here. And yes things will change. Some old jobs will become obsolete. Some new ones will appear. This is normal, it's been happening forever.
Edit: Do you mean that the speed of the transfers was the problem?
Maybe this is just an XKCD moment https://xkcd.com/1172/ ...
I'm nowhere near a hiring position but if I was I'd add assessing that to the application procedure.
It feels like this is part of a set of growing issues, with millennials being the only generation in between Gen X / Boomers and Gen Z that has computer skills and can do things like manage files or read a whole paragraph of text without a computer-generated voice + RSVP [0] + Subway Surfers gameplay in the background.
But it was also millennials that identified their own quickly diminishing attention spans, during the rise of Twitter, Youtube, Netflix and the like [1].
I want to believe all of this is giving me some job security at least.
[0] https://en.wikipedia.org/wiki/Rapid_serial_visual_presentati...
[1] https://randsinrepose.com/archives/nadd/ (originally published 2003, updated over time to reference newer trends)
It really isn't. Calculators don't steal from actuaries. Actuaries and calculators both grabbed from nature. Microwaves didn't take from the fireplace.
People really want to pretend that LLMs aren't just storing massive amounts of other people's data with reckless abandon, and pretend instead that it's like the car to the horse. Cars didn't need literal horsepower.
>It's ridiculous to think drawing will become a lost art because of LLM/Diffusal models when we live in a reality where powerlifting is a thing.
I agree that athletes are the last thing to be replaced by our robot overlords. If there's one cheat code to the human element, it's charisma. And people bouncing balls real good is a trillion-dollar industry that warps our entire civilization, from forming hobbies to shaping what we find "attractive".
That's more of a cultural thing than a practicality thing, though. People respect athletes. They do not apparently respect artists nor engineers, let alone blue collar work that turns out to be way more expensive to automate.
That’s a nice description, to be honest.
I'm SO stealing this!! <3
Also, more importantly, they will ignore negative directives. So telling it "don't do X" will get ignored. You are better off using positive directives instead.
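A rough sketch of the kind of rewrite that means in practice; the prompts below are invented purely for illustration, no particular model or API implied:

    # Hypothetical prompt strings only; the point is the phrasing, not any API.
    negative_prompt = (
        "Refactor this function. Don't use global variables. "
        "Don't change the public API."
    )
    # The same instruction recast as positive directives, which models tend to
    # follow more reliably than "don't do X":
    positive_prompt = (
        "Refactor this function. Keep all state local to the function and "
        "preserve the existing public API exactly as it is."
    )
    print(positive_prompt)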
But, to make a comparison here with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with WTF!) and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.
Momentum. They are the big games in town because so many people use them, so many people use them because they are the big games in town. There was a time for both when they didn't suck as much as they do now, at least relative to what other options existed.
Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.
My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.
An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.
No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.
It's also about 80% of what I do as a software developer already.
The copy-paste of usable code snippets is somewhat comparable to any use of a library or framework, in the sense that there's an element of not understanding what the entire thing is doing, or at least how. So every time this is done it adds to the knowledge debt: a borrowing of time, energy and understanding needed to come up with the thing being used.
By itself this isn't a problem, and realistically it's impossible to avoid; in a lot of cases you may never get to the point where you have to pay this back. But there's also a limit on the rate of debt accumulation, which is how fast you can pull in libraries, code snippets and other abstractions, and as you said, LLMs' ability to just produce text at a superhuman rate potentially serves to _rapidly_ increase the rate of knowledge debt accumulation.
If debt as an economic force is seen as something that can stimulate short-term growth then there must be an equivalent for knowledge debt, a short-term increase in the ability of a person to create a _thing_ while trading off the long-term understanding of it.
Yeah? What about what LLMs help with? Do you have no code that could use translation (move code that looks like this to code that looks like that)? LLMs are real good with that, and they save dozens of hours on single sentence prompt tasks, even if you have to review them.
Or is it all bad? I have made $10ks this year alone on what LLMs do, for $10s of dollars of input, but I must understand what I am doing wrong.
Or do you mean, if you are a man with a very big gun, you must understand what that gun can do before you pull the trigger? Can only the trained pull the trigger?
Why is that? You can just buy 0.00000001 BTC.
Also, I'm not sure if a radical lack of regulation / full decentralization is a good thing when we are talking about money.
In my opinion, money should be regulated by governments.
But this discussion tends to escalate and the arguments have been made ad nauseam, so I'm tuning out here, sorry.
This goes too far in the other direction. LLMs can do far more than merely saving you typing. I have successfully used coding agents to implement code which at the outset I had no business writing as it was far outside my domain expertise. By the end I'd gained enough understanding to be able to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background info to make an attempt at coding it myself.
> Looks like I was the inspiration for this post then
I replace that with:
> Looks like I was the inspiration for finishing this post then
And this:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
I can rephrase as:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the introduction of the table saw.
Legitimately, I think you are missing my point. What I quoted out of your response could be applied to prompt engineering/management/tinkering. I think everyone who likes doing this with juniors and hates it with AI is conflating their enjoyment of teaching juniors with the dopamine you get from engaging with other primates.
I think most people I’ve met who hated AI would have the same level of hate for a situation where their boss made them actually manage an underperforming employee instead of letting them continue on as is ad infinitum.
It’s hard work both mentally and emotionally to correct an independent agent well enough to improve their behavior but not strongly enough to break them, and I think most AI haters are choking on this fact.
I’m saying that from the position of an engineer who got into management and choked on the fact that sometimes upper leadership was right, and the employee complaining to me about the “stupid rules”, or trying to lie to me to get a gold star instead of a bronze one, was the agent in the system who was actually at fault.
Through no fault of their own, but they're literally blind. They don't have eyes to see, ears to hear or fingers to touch and feel, and have no clue if what they've produced is any good for the original purpose. They are still only (amazing) tools.
Obviously if one product hallucinates and one doesn't, it's a no-brainer (cough Intel FPUs). But in a world where the only calculators available hallucinated at the 0.5% level, you'd probably still have one in your pocket.
And obviously, if the calculator hallucinated 90% of the time for a task which could otherwise be automated, you'd just use that other approach.
For example, I have no knowledge of film editing or what “works” in a sequence, but if I wanted to I could create something more than passable with AI.
I miss when rich people spent their time appealing to other rich people with art instead of screwing over all of society.
Well, they always did the latter. But they had their circuses as well. Now it's just cold, hard corporatism.
Or, to use a different metaphor, these comments are mentally nutritional Doritos, not a nicely prepared restaurant meal. If your restaurant only serves Dorito-level food, I won't go there even if I do consume chips quite often at home.
I only use AI for small problems rather than let it orchestrate entire files.
Ethernet does not need to carry the whole movie in one packet. If it does the job of delivering frames up to the MTU to the host on the other side of the cable, it's good. Websockets can be figured out somewhere else. The IP stack is not shit because each layer does just one thing, it's good because of that.
But hey, time will tell.
You don't want more technical debt.
Ideally, you want zero technical debt.
In practice only a hello world program has zero technical debt.
Take this snippet of code, and this is what each part means, and how you can change it.
It doesn't explain how it is implemented, but it explains the syntax and the semantics of it, and that's enough.
Good documentation makes all the difference, at least for me.
Technological progress makes the world deflationary. Your money should be able to buy more over time as we improve the productive efficiency of everything. And for poor countries, the best thing they could get is a censorship-resistant and value-preserving tool.
Even if there were a tail emission, newer generations wouldn't have the capital needed for mining rigs. That's not unique to this case; the same happens with stocks, real estate or any other investment asset.
I'm not sure it's a lot of value. It probably is in the short term, but in the long run...
There have already been studies saying that you don't retain the info about what an LLM does for you. Even if you are already an expert (a status you have attained the traditional way), that cuts you off from all those tiny improvements that happen every day without you noticing.
If you want a compromise, invest in an air fryer. It has the opposite problem, so cook 80% in an air fryer and let the microwave finish the center.
Quite literally laughed out loud at this. I still cannot believe this is a thing people actually do; I thought it was a joke "genre" at first.
It's good enough, it gets the job done and does not cause enough friction for the person to feel the need to improve. Complacency.
In my experience, every time a delimiter is opened, it automatically closes, allowing you to move away from it without thinking.
Even in places where this is not available (Slack, comment boxes, etc.), I close the delimiter as soon as I open it.
We teach our kids about microwave oven safety for this reason.
(thanks Rory Sutherland for this analogy)
Why would someone buy a plate off her, when they could get one from IKEA for 1.50 eur?
Yet ceramics is not a dead art. Beats me?
Your calculator is broken.
> With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect).
So you’re arguing against a strawman?
>”list them”
If you are arguing that something exists, then being asked to prove its existence is table stakes, not a poor argument.
You starting with a strong argument in your list of bad arguments, and then ending with shit that mocks anyone calling you out, makes me believe that you are not discussing this topic in good faith.
If we don't have something better to do we'll all be at home doing nothing. We all need jobs to afford living, and already today many have bullshit jobs. Are we going to a world where 99.9% of the people need a bullshit job just to survive?
Let's say my friends and I agree to carve off 0.00002 BTC of supply and pretend that is the whole world of currency. We could run a whole country using that 0.00002 BTC as money. Except that anyone who has 1 BTC can break into our walled garden and, with a tiny fraction of their holdings, buy the entire walled garden, and there's no way to prevent this as long as our money is fungible with theirs. It's the same reason you wouldn't use immibiscoins as a currency: I could just grant myself a zillion of them and buy everything you have. Except that in the case of bitcoin the grant is pre-existing.
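To put rough numbers on it, using only the figures above (a toy calculation, nothing more):

    # Toy figures from the comment above: a "walled garden" economy running on
    # 0.00002 BTC, versus an outsider holding 1 BTC.
    garden_supply_btc = 0.00002
    outsider_holdings_btc = 1.0

    # The outsider's stash expressed as multiples of the garden's entire money supply.
    ratio = outsider_holdings_btc / garden_supply_btc
    print(f"Outsider holds {ratio:,.0f}x the garden's whole money supply")
    # -> 50,000x: spending a tiny fraction of it buys everything priced in the garden's currency.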
Deflationary currencies are fundamentally unstable, just like currencies that one guy can print at will, because they decorrelate quantity and merit.
It should, but nobody can quite agree on exactly what should and shouldn't be allowed. Fraud, everyone agrees no. Porn, drugs, guns, gambling, foreign work, etc., everyone has a different opinion; governments abuse their power to prohibit transacting "harmless" goods/services, the people who transact those things reject the prohibition and turn to grey and black markets, then suffer from the lack of strong regulation therein.
I'm sure I can do what you describe as well. I've actually used LLMs to get myself current on some stuff I knew (old) basics for and they were useful indeed as you say.
I'm also sure it wouldn't help your interns to grow to your level.
Only bad code, and what takes the time is understanding it, not rewriting it, and the LLM doesn't make that part any quicker.
> they save dozens of hours on single sentence prompt tasks, even if you have to review them
Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.
If that’s your true impetus, please don’t bother. There’s nothing which benefits me about your words being clearer and less open to misinterpretation. You are, of course, completely welcome to disagree with and ignore my suggestions.
> thanks to the introduction of the table saw.
That makes absolutely no difference at all. And it doesn’t matter anymore either, the harm to your point is already done, no one’s going back to it now to reinterpret it. I was merely pointing out what I see as having gone wrong so you can avoid it in the future. But again, entirely up to you what you do with the feedback.
You do not know that LLMs in the future can't replace humans. You can only say that right now they can't replace humans. In the future the structure of the LLM may be modified, or it may become one module out of multiple that are required for AGI.
These are all plausible possibilities. But you have narrowed it all down to a “no”. LLMs are just tools with no future.
The real answer is nobody knows. But there are legitimate possibilities here. We have a 5 year trend line projecting higher growth into the future.
The majority of artists, and of all other groups, are in fact mediocre with mediocre virtues, so enough incentives would turn most of them into Whatever shillers like the post describes.
So a non-expert cannot easily determine, even if they do stumble upon “Serious art” by happenstance, whether it’s just another empty scheme or indeed something more serious.
Maybe if they spend several hours puzzling over the artist’s background, incentives, network, claims, past works, etc… they can be 99% sure. But almost nobody likes any particular piece of work that much upon first glance, to put in that much effort.
Slide rules are good for only a couple of digits of precision. That's why shopkeepers used abacuses, not slide rules.
I have a hard time understanding your hypothetical. What does it mean to hallucinate at the 0.5% level? That repeating the same question has a 0.5% chance of giving the wrong answer but otherwise it's precise? In that case you can repeat the calculation a few times to get high certainty. Or that even if you repeat the same calculation 100 times and choose the most frequent response then there's still a 0.5% chance of it being the wrong one?
Or that values can be consistently off by within 0.5% (like you might get from linear interpolation)? In that case you are a bit better than a slide rule for estimating, but not accurate enough for accounting purposes, to name one.
Does this hypothetical calculator handle just plus, minus, multiply, and divide? Or everything that a TI 84 can handle? Or everything that WolframAlpha can handle?
If you had a slide rule and knew how to use it, when would you pay $40/month for that calculator service?
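On the "repeat the calculation a few times" reading above, a quick sketch of how fast independent repeats drive the failure rate down, assuming the errors really are independent and that wrong answers don't happen to agree (both big assumptions):

    # Each run of the hypothetical calculator is wrong with probability 0.5%.
    p_wrong = 0.005

    for repeats in (1, 2, 3):
        # Chance that every single run is wrong, i.e. no agreeing correct answer appears.
        p_all_wrong = p_wrong ** repeats
        print(f"{repeats} run(s): chance of no correct answer = {p_all_wrong:.2e}")
    # 1 run: 5.00e-03, 2 runs: 2.50e-05, 3 runs: 1.25e-07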
Far fewer people make their living as musicians than did even thirty years ago, and being a musician is no longer a viable middle-class career. Jaron Lanier, who has written on this, has argued that it's the direct result of the advent of the internet, music piracy, and streaming -- two of which originally were expected or promised to provide more opportunities for artists, not take them away.
So there really are far fewer drummers, and fewer, worse opportunities for those who remain, than there were within the living memory of even most HN users, not because some specific musical technology advanced but because technological advancement provided an easier, cheaper alternative to human labor.
Sound familiar yet?
But since almost all the tokens bear neither interest nor dividends, it looks a lot more like a casino.
But my man, how do you know if it explains perfectly well or is just generating plausible-sounding slop? You're learning, so by definition you don't know!
So should AI also indicate this to me? That the job will suck, and there would be bad coworkers around me in the job?
If an LLM hallucinates and you don't know better, it can be bad. Hopefully people are double checking things that really matter, but some things are a little harder to fact check.
However, I do have a pressure cooker and a rice cooker that get a lot of use. They're extremely reliable, they don't use much electricity, and I can schedule what they do, which lets me bulk cook without having to care about it while it happens.
But 200 years ago there were loads of ceramics manufacturers, employing hundreds of thousands of skilled potters. Even 50 years ago, there were thousands of skilled ceramicists in the UK. Now it's single-person artisans, like your very talented other half.
Now, that reduction in workforce took 200 years and mirrors the industrial revolution. GenAI is looking like it's going to speed-run that in ~5-7 years.
I should be more clear: there is a difference between a dead art (memorizing stories) and an art that is no longer a viable career for all but 1% of the people it supports now. I'm talking about the latter.
Basically most of it is all noise but eventually something blows ahead of the curve and shines for a couple of seconds before the curve/wave catches up and engulfs it again.
Most of what "goes" ahead of the "curve" is achievements of other people. You, I, and most other people are just watching other exceptional or lucky individuals shine for a couple of seconds. We are all watchers. There's no difference from our perspective if the people we are watching are humans or AI. It's not us anyway.
If you are an exceptional genius who regularly achieves things that go past the "curve" then you will be affected. You will watch AI achieve things faster than you, do better than you, etc. In this case you are affected.
Keep in mind that this "curve" I talk about is both global and has many many localized instances. Like a superstar at a start up is a localized example of an individual blowing past a localized curve, and Einstein is a global example of an individual excelling past a global curve.
I was getting at the valuation and the value extraction that is derived solely from the sigmoid-like climb of value in cryptocurrency, which is driven by speculation on BTC as an asset, not just by increasing adoption for regular economic transactions backed by external value.
Sure, the more traction and trust this system gains, the last part of my sentence (BTC backing "real value") might increasingly become true, making my argument moot. But the almost feudalistic problems of "initial" allocation are not unimportant, in my view.
I also checked with some Japanese speakers, and my own notes contain more errors than the LLM's output by a large margin.
Zelle, previously known as clearXchange, and whatever else: if you had an account at one of the bigger banks, it has long been trivial to send money to each other.
https://en.wikipedia.org/wiki/Zelle
> In April 2011, the clearXchange service was launched. It was originally owned and operated by Bank of America, JPMorgan Chase, and Wells Fargo.[6][7] The service offered person-to-person (P2P), business-to-consumer (B2C), and government-to-consumer (G2C) payments.[8]
There's no need to break normal editing behavior.
Alec Watson of Technology Connections points out that GPS routing defaults to minimizing time, even when that may not be the most logical way to get somewhere.
His commentary, which starts at https://youtu.be/QEJpZjg8GuA?t=1804 , is an example of his larger argument about the complacency of letting automation do things for you.
His example is a Google Maps routing which saves one minute by going a long way round to use a fast expressway (plus a $1 toll), rather than more direct but slower state routes and surface streets. It optimizes one variable, time, out of the many variables which might be important to you: wear and tear, toll costs, and the delight of knowing more about what's going on in the neighborhood.
He makes the point that he is not calling for a return to paper maps, but rather for rejecting automation complacency, which I'll interpret as letting the GPS figure everything out for you.
We've all heard stories of people depending on their GPS too much then ending up stuck on a forest road, or in a lake, or other place which requires rescue - what's the equivalent failure mode with a calculator?
Yes, that means we are at an impasse. Use search, ask an LLM; if even that is too much initiative for a quite outdated skeptic to take even now, then I can't help you.
There are hundreds of billions, maybe trillions, in volume going through financial services on blockchains, and it doesn't matter if financial services isn't a sector you care about or aren't the target audience for; there are people there who will pay to solve problems they have.
>> We all need jobs to afford living
In many countries this is already not true. There is already enough wealth that there is enough for everyone.
Yes, the Western mindset is kinda "you don't work, you don't get paid". The idea that people can "freeload" on the system is offensive at a really deep emotional level. If I suggest that a third of the people could work and the other two thirds do nothing but get supported, most will get distressed [1]. The very essence of US society is that we are defined by our work.
And yet if 2% of the work force is in agriculture, and produce enough food for all, why is hunger a thing?
As jobs become ever more productive, perhaps just -considering- a world where worth is not linked to output is a useful thought exercise.
No country has figured this out perfectly yet. Norway is pretty close. Lots of Europe has endless unemployment benefits. Yes, there's still progress to be made there.
[1] of course, even in the US, already it's OK for only a 3rd to work. Children don't work. Neither do retirees. Both essentially live off the labor of those in-between. But imagine if we keep raising the start-working age, while reducing retirement benefits age....
Well, humans typically read way faster than they write, and if you own the code, have a strict style guide, etc., it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust an LLM with anyway.
Also, these non-human entities we are discussing tend to output code very fast.
This is all just my opinion of course, but it's easy to expect that being an LLM that knows all there is to know about every subject written in books and the internet would be enough to do every office work that can be done with a computer. Yet strangely enough, it isn't.
At this point they still lack the necessary feedback mechanism (the senses) and ability to learn on the job so they can function on their own independently. And people have to trust them, that they don't fail in some horrible way and things like that. Without all these they can still be very helpful, but can't really "replace" a human in doing most activities. And also, some people seem to possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.
I agree that nobody knows the answer. If and when they arrive at that point, by then the LLM part would probably be just a tiny fraction of their functioning. Maybe we can start worrying then. Or maybe we could just find something else to do. Because people aren't tools, even when economically worthless.
If you're a nationalist, your worry is obvious enough, but if you're a humanist, then it's wonderful that the more downtrodden are going to improve their station, while the better off wait for them.
Shopkeepers did integer math, not decimal. They had no need for a slide rule, an abacus is faster at integer math, a slide rule is used for dealing with real numbers.
What's your basis for this claim? Please provide some data showing the number of drummers over time, or at least musicians, over the last fifty years or so. I tried searching and couldn't find anything, but you're so confident, I'm sure you have a source you could link.
What he is comparing against was a brief time in history when the music industry was at its absolute peak.
We have just gone back to normal: most people can't make money being a musician, just like being an actor is not really a viable middle-class career option.
Sure, when I graduated high school you could have just made a living in a local rock band, because everyone wanted to be in a band and be the next Guns N' Roses.
To me, it is like how even Hitler wanted to be a painter, because everyone wanted to be a painter at that time. The way everyone wanted to be a rock star when I was a teenager.
Times change and the collective artistic taste change with them. So many musicians are doing better than ever before because of youtube too.
I play the baroque lute and I can tell you that it is much tougher to get a gig in a bar today than it was in 1650 in France.
The best lutenists, though, are killing it on YouTube with Bach videos.
It's just a filtering problem.
There are screeners to narrow everything down, just like for the stock exchanges.
Similarly, I used the Busuu app for a while. One of its features is that you can speak or type sentences, and ask native speakers to give feedback. But of course, you have to wait for the feedback (especially with time zone differences), so they added genAI feedback.
Like, what’s the point of this? It’s like that old joke: “We have purposely trained him wrong, as a joke”!
Firstly the "theft component" isn't exactly new. There have always been rich and poor.
Secondly everyone is standing on the shoulders of giants. The Beatles were influenced by the works of others. Paul and John learned to write by mimicking other writers.
That code you write is the pinnacle of endless work done by others. By Ada Lovelace, and Charles Babbage, and Alan Turing and Brian Kernighan and Dennis Ritchie and Doug Engelbart and thousands and thousands more.
By your logic the entire output of all industries for all foreseeable generations should be universally owned. [1]
But that's not the direction we have built society on. Rather society has evolved in the US to reward those who create value out of the common space. The oil in Texas doesn't belong to all Texans, it doesn't belong to the pump maker, it belongs to the company that pumps the oil.
Equally there's no such thing as 'your data'. It's your choice to publish or not. Information cannot be 'owned'. Works can be copyrighted, but frankly you have a bigger argument on that front going after Google (and Google Books, not to mention the Internet Archive) than AI. AI may train on data you produced, but it does not copy it.
[1] I'm actually for a basic income model, we don't need everyone working all day like it's 1900 anymore. That means more taxes on companies and the ultra wealthy. Apparently voters disagree as they continue to vote for people who prefer the opposite.
This is 100% just the mechanization of a cultural refinement process that has been going on since the dawn of civilization.
I agree with you regarding how the bounty of GenAI is distributed. The value of these GenAI systems is derived far more from the culture they consume than the craft involved in building them. The problem isn't theft of data, but a capitalist culture that normalizes distribution of benefit in society towards those that are already well off. If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
Poor typists always slow down processes, and frequently become a bottleneck, local or global. If you can speed up a process by only ten seconds per Thing, by improving someone’s typing skills or by fixing bad UI and workflow, you only have to process 360 Things in a day (which is about one minute per Thing) to have saved an entire hour.
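Spelling out the arithmetic behind that saved hour:

    # 10 seconds saved per Thing, 360 Things processed in a day.
    seconds_saved_per_thing = 10
    things_per_day = 360

    hours_saved = seconds_saved_per_thing * things_per_day / 3600
    print(f"{hours_saved:.1f} hour(s) saved per day")  # -> 1.0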
It can be very eye-opening to watch a skilled typist experienced with a software system that was designed for speed, working. In more extreme cases, it can be that one person can do the work of ten. In more human-facing things, it can still be at least a 50% boost, so that two skilled people can beneficially replace three mediocre.
If I do know the area, I can simply find the area and then drive straight there. There's still value in my mind of being aware of your environment.
When it's just reading, perhaps, but to review you have to read carefully and understand. It's like the classic quote that if you're writing code at the limits of your ability you won't be able to debug it.
> if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway
The way I see it if the code is that simple and repetitive then probably that repetition should be factored out and the code made a lot shorter. The code should only need to express the novel/distinctive parts of the problem - which, as you say, are the parts we wouldn't trust an LLM with.
Luckily there's more than one cryptocurrency. Many of the current generation have asymptotically constant tail emissions, which doesn't really solve the underlying mismatch between emission and demand, but at least doesn't make it deliberately bad.
Well, there was one that tried to maintain a constant US$ price by cross-leveraging all of the risk onto a sister currency, but that crashed pretty hard (partly by the algorithm not working as well as thought, and partly by deliberate rugpull).
One that I'm aware of is radically different and allows both positive and negative balances that decay towards zero; although I don't really like that one's implementation, that feels like an idea worth exploring. It's pretty much incompatible with any traditional currency though.
Last time I named specific cryptocurrencies I got downvotes for advertising so I won't.
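For the "balances that decay towards zero" idea, here is a toy sketch of how I read it, assuming plain exponential decay; the rate and the symmetric treatment of debts are my own illustration, not any particular currency's actual rule:

    import math

    def decay(balance: float, rate_per_year: float = 0.05, years: float = 1.0) -> float:
        """Move a balance (positive or negative) towards zero by exponential decay.
        The 5%/year rate is arbitrary, chosen only to illustrate the idea."""
        return balance * math.exp(-rate_per_year * years)

    print(decay(100.0))   # a credit shrinks: ~95.12
    print(decay(-100.0))  # a debt also shrinks towards zero: ~-95.12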
Sure, I'd be down for it. But I think that's less realistic; instead I think my country's government will make way for a rise of feudalism. Except most will starve. It will make the Great Depression seem quaint in comparison.
>And yet if 2% of the work force is in agriculture, and produce enough food for all, why is hunger a thing?
I'd love to know the answer too. I think we both know the true answer, though.
>I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
That's the issue. Even right now, many don't get enough even when they work full time. Living is unsustainable. How is the problem going to get better, especially when those who would have to pay will instead lobby to not pay out to the people?
Like:
1- Put it on the edge of the plate, not in the middle
2- Check every X seconds and give it a stir
3- Don't put metal in
4- Don't put sealed things in
5- Adjust time for wetness
6- Probably don't put in dry things? (I believe you need water for a microwave to really work? Not sure, haven't tried heating a cup of flour or making caramel in the microwave)
7- Consider that some things heat weirdly, for example bread heats stupid quick and then turns into stone equally as quick once you take it out.
...
Anyone can claim to have “real art”.
Could you provide data defending this claim? Without it, and even with it, all I see in your comment is that you're begging the question or shrugging your shoulders at the data and saying, "so what," not actually or substantively disagreeing with anything Lanier has said or written.
What caused the decline? You seem very sure you know the answer, and yet your answer basically seems to be to stop asking the question or investigating: "music was at its peak, so obviously it declined." If music was at some absolute peak, why was that? "It was at its peak" isn't an answer. It's a restatement of the question.
And can you show me that there were fewer musicians per capita, making less money in adjusted terms, twenty or thirty years earlier?
And do you have any data showing that more than a tiny, miniscule fraction of musicians are doing "better than ever before" thanks specifically to YouTube? "So many" is slippery and frustratingly difficult to quantify in a manner that lets me evaluate its accuracy.
And we want to normalize that? We can also go back to the times when 8-year-olds worked in the mines and humans worked 7 days a week for 12+ hours.
>The way everyone wanted to be a rock star when i was a teenager.
Everyone wanted to be an astronaut growing up at one point too. That trend faded... but it turns out astronauts can get a living wage. Or at least, I sure hope so.
Also, no. I would not invest in this hype bubble. We're definitely getting an AI crash within the next 5-7 years, a la the dotcom crash. I prefer safer stocks if I have the choice.
I am more on your side, personally. When learning, I do not want to spend half my time scrutinizing the teacher. At least not on the objective fundamentals. If the fundamentals are broken, how do I trust anything built on top?
And that's the scary thing: LLMs don't "build on top"; they more or less do the equivalent of running around the globe first and coming back with an answer.
The difference is that, for better or worse, our society chose to follow the model that artists own the rights to their work. That work was used for commercial purposes without the consent of the artists. Therefore it's theft.
I actually do believe all industries should be worker owned because the capitalists have proven they can't be trusted as moral and ethical stewards, but I'm specifically talking about GenAI here.
I think it's disingenuous to say that people have a choice to publish data or not in an economic system that requires them to publish or produce in order to survive. If an artist doesn't produce goods, then they aren't getting paid.
Also this is kind of a pedantic rebuttal but the GenAI software technically does first have to copy the data to then train on it :) But seriously, it can be prompted to reproduce copyrighted works and I don't think the rights holders particularly care how that happens, rather that it can and does happen at all.
We have not achieved AGI yet, so comparing the human mind to what's ultimately a robotic database is a comparison made on a flimsy premise. AI isn't generating a style any more than a user bashing 3 templates together is.
Even when we hit AGI, we have different issues. A brain can't perfectly recite a song it hears. It will not objectively interpret the same soundwaves from brain to brain. It will not react the same way from brain to brain, due to different experiences and perspectives. What AGI develops into may or may not take all of this into account.
>If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
Sure. We can also make sure they pay the artists being copied from while we tax them more. Let's not dismiss theft by casting off the theft as magic. This isn't Now You See Me...
I use it all the time, pretty much zero issues.
And universally, if you have nothing, you lead a very poor life. You live in minimal housing (a trailer park, slums, or housing without running water or working sewage). You don't have a car, you can't travel, education opportunities are limited.
Most kids want to become independent, so they have control over their spending and power over their own lives. Poor retirees are unhappy, sometimes even have to keep working to afford living.
Norway is close because they have oil to sell, but if no one can afford to buy oil, and they can't afford to buy cars, nor products made with oil, Norway will soon run out of money.
You can wonder why Russia is attacking Ukraine; Russia has enough land, it doesn't need more. But in the end there will always be people motivated by more power and money, which makes it impossible to create this communism 2.0 that you're describing.
I just type the address into Google maps, or place a pin manually, then hit the start button. It'll tell me every step of the way. Keep right at the fork. In a hundred meters, turn left. Turn left. Take the second exit in the roundabout. Your destination is a hundred meters ahead on the right.
It's great and it works almost flawlessly. Even better if you have a passenger able to keep an eye on it for those times when it isn't flawless.
(I’ve heard the fans that you hear are there to reflect the microwaves and make them bounce all over the place, but I don’t know if that’s true. Regardless, most models have a spinning plate which will constantly reposition the food as it cooks.)
I don't think the outsourcees are reaching their full potential being paid $2/hr to make American corporations billions. They are simply going to survive and lift themselves to a liveable standard.
https://en.wiktionary.org/wiki/couple - "(informal) a small number"
FWIW, "Maximum accuracy for standard linear slide rules is about three decimal significant digits" - https://en.wikipedia.org/wiki/Slide_rule
While yes, "Astronomical work also required precise computations, and, in 19th-century Germany, a steel slide rule about two meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places" (same Wikipedia page), remember that this thread is about calculating devices one might carry in one's pocket, have on one's self, or otherwise be able to "grab".
(There's a scene in a pre-WWII SF story where the astrogators on a large interstellar FTL spacecraft use a multi-meter long slide rule with a microscope to read the vernier scale. I can't remember the story.)
My experience is that I can easily get two digits, but while I'm close to the full three digits, I rarely achieve it, so I wouldn't say you get three decimal digits from a slide rule of the sort I thought was relevant.
I'm a novice at slide rules, so to double-check I consulted archive.org and found "The slide rule: a practical manual" at https://archive.org/details/sliderulepractic00pickrich/page/...
> With the ordinary slide rule, the accuracy obtainable will largely depend upon the precision of the scale spacings, the length of the rule, the speed of working, and the aptitude of the operator. With the lower scales it is generally assumed that the readings are accurate to within 0.5 per cent. ; but with a smooth-working slide the practised user can work to within 0.25 per cent
That's between 2 and 3 digits. You wouldn't do your bookkeeping with it.
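Concretely, applying the quoted 0.5% and 0.25% figures to an example reading:

    # Reading error of a slide rule's lower scales, per the manual quoted above.
    reading = 2.74  # some value read off the rule

    for pct in (0.5, 0.25):
        err = reading * pct / 100
        print(f"{pct}% of {reading} = +/- {err:.4f}")
    # 0.5%  -> +/- 0.0137: the third significant digit is already shaky
    # 0.25% -> roughly +/- 0.007: about three digits on a good day, hence "between 2 and 3"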
>> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor. Sure thing, Disco Stu!
There's also the principle of the matter that we shouldn't have to pay for a share of something that was built using our collective unpaid labor/property without our consent.
The two parties that end up viable tend to be financed quite heavily by said wealthy, including being propped up by the media said wealthy control.
The more right-wing side will promise tax cuts (including for the poor, which don't seem to materialize) while the more left-wing side will promise to tax the rich (but in an easily dodgeable way that only ends up affecting the middle class).
Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one. And I think the wealthy backers wanted it that way.
We seem to value the actual craft less and less with each passing day, so everything slows down overall.
I'm also aware of the failure modes with GPS complacency, including its incomplete knowledge of the variables that I find important.
And that's with something that makes mistakes far less often than LLMs and related technology.
Which is why I don't think that your mention of GPS use is a strong counter-example to bryanrasmussen's comment against using hallucinating devices.
And yes, that is a tiny fraction of all financial services volume overall, or even of volume involving crypto assets.
I was referring to the traffic on-chain, as that's what's interesting.
Permissionless liquidity provision in unincorporated partnerships is still novel and unique to those platforms, and highly lucrative, on assets that don't need permission to be listed anywhere.
They do a lot more things though, which microwaves don't. Pizza, for example, has to be cooked properly, not with a microwave. If I can only have one, I'll take the mini conventional oven.
Citation needed.
Older microwaves had a fan-like metal stirrer inside the cooking box, that would continuously re-randomize where the waves went. This has been out of fashion for several decades.
curious rich people.
I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
Actually Norway gets most of that from investments, not oil. They did sell oil, but invested that income into other things. The sovereign wealth fund now pays out to all citizens in a sustainable way.
Equally, your understanding of dole living in Europe is incomplete. A person on the dole in the UK is perfectly able to live in a house with running water etc. I know people who are.
Creating a base does not mean "no one works". Lots of people in Europe have a job despite unemployment money. And yes, most or all jobs pay better than unemployment. And yes, lifestyles are not equal. It's not really communism (as you understand it).
This is not about equal power or equal wealth. It's about the idea that a job should not be linked to survival.
Why is 60 the retirement age? Why not 25? That sounds like a daft question, but understanding it can help you understand how some things that seem cast in stone really aren't.
I don't know how to respond to your comment in that context.
Do you double-check your calculator all the time, to ensure it's not giving you the wrong answer?
As to Alec Watson's commentary about GPS, how do you know the area well enough to make judgements if you always follow the GPS routing which avoids the neighborhood?
Composition is part of it, but it isn’t the whole story. A microwave oven is a resonant cavity. There are standing electromagnetic waves in there, in several different modes. They have peaks and nulls. That’s why many microwaves have a rotating plate. It physically moves the food relative to the standing waves.
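For a sense of scale, assuming the usual 2.45 GHz magnetron frequency of domestic ovens, the peaks of those standing waves sit roughly half a wavelength apart:

    # Spacing of hot and cold spots in the cavity, assuming a 2.45 GHz magnetron.
    c = 3.0e8    # speed of light, m/s
    f = 2.45e9   # operating frequency, Hz

    wavelength_cm = c / f * 100
    peak_spacing_cm = wavelength_cm / 2  # adjacent antinodes of a standing wave
    print(f"wavelength ~{wavelength_cm:.1f} cm, peaks every ~{peak_spacing_cm:.1f} cm")
    # -> about 12.2 cm wavelength, peaks roughly 6 cm apart, which is why the plate rotates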
How many musicians or artists are finding their need to explore similarly met by opportunities that simply didn’t exist in 2002? If art is expression, then we should expect the people who might have wielded a brush or guitar to be building software instead.
If this is you, I recommend Rick Rubin’s The Creative Act. It’s as pure an expression of the way I like to work in music, as it is aligned with how I think about code and product design.
In a more socialist context, UBI is an answer.
In a more communist context, taking over all labs for the people is an answer.
In a dictatorial context, banning AI is an answer.
What are you recommending?
It's all about blockspace and the commodity that pays for the blockspace.
That was true when bitcoin was 2 cents, and it's true when bitcoin is $109,000 and 2 cents.
I mean, are you enjoying your socioeconomic status? The chronology was very clear to some, and they were right. It wasn’t luck, and it wasn’t really a binary proposition. You can read old threads around the net from 2012 to see exactly where we are going. You can help make that happen or passively act surprised. Pretty much every theorized issue can be programmed away; that’s what gives people confidence in this asset class compared to others.
Not necessarily. That's a result of first past the post, not of voting in general. Ranked-choice voting solves a lot of this extremist two-party system. The dominant parties need to at least pretend to appeal enough to moderates that a third party isn't outvoting both of them.
>Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one.
So the right wingers never really cared about inflation, egg prices, and the job market. I wish I could pretend to be shocked at this point.
I would contest some of your points though.
Firstly, not every country votes, and not all that do vote are limited to 2 viable parties, so that's a flaw in your argument.
Equally most elections produce a winner. That winner can, and does, get stuff done. The US is paralyzed because it takes 60% to win the senate, which hasn't happened for a while. So US elections are set up so "no one wins". Which of course leads to overreach etc that we're seeing currently.
There's a danger when living inside a system that you assume everywhere else is the same. There's a danger when you live in a system that heavily propagandizes its own superiority, that you start to feel like everywhere else is worse.
If we are the best, and this system is the best, and it's terrible, then clearly all hope is lost.
But what if, maybe, just maybe, all those things you absolutely, positively know to be true are not true? Is that even worth thinking about?
New Merriam-Webster dictionary, 1989, def. 4 "an indefinite small number" - https://archive.org/details/newmerriamwebste00spri/page/180/...
Pocket Oxford English dictionary, 2005, def. 3 "(informal) an indefinite small number" - https://archive.org/details/pocketoxfordengl0000unse_p5e4/pa...
The Random House college dictionary, 1975, def. 6, "a couple of, (Informal) a small number of, a few"
If I spend a lot of time in an area I learn it and don't need GPS to navigate it; however, I might use GPS just to find specific addresses, as I don't usually memorize every street name. I also usually find that Google Maps chooses perfectly sensible routes anyway, so I don't see much point in trying to second-guess it. Oh, maybe I can save a minute or two, or a few kilometers by avoiding a highway; honestly, who cares? I certainly don't. It will usually offer multiple route alternatives anyway, and your ideal route, or something close to it, is probably among them.
Living on welfare in the Netherlands is not a good life, and definitely not something we should accept for the majority of the people.
Being retired on only a state pension is a bad life, you need to save for retirement to have a good life. And saving takes time, that's why you can't retire at 25.
More subtly, I'll modify the dictatorial context to require payment to any sources an AI uses, and strong enforcement against infringements by AI. The core problem with capitalistic society is that money tends to bubble up to the top and then stay there. The goal of regulation should partly be to make sure that money is incentivized not to stay at the top in stocks.
Given the current job market: very few. They didn't become SWE founders, they were thrown into dead end jobs as a means to survive. At best, maybe they became music teachers to try and keep the spark alive.
The survivor's bias is pretty strong here.
>If art is expression, then we should expect the people who might have wielded a brush or guitar to be building software instead.
Everyone expresses differently. Too bad that not all expressions lead to a career that sustains oneself. If you really believe AI will take over programming, what's the next frontier after building software?
Secondly, most software is product, not art. Most people aren't going to feel like they are expressing anything as they pump out CRUD widgets. That's just modern day pencil pushing.
Yup, we really need to fix this problem in many countries. Ranked choice is a great idea that should be pushed for.
> So the right wingers never really cared about inflation, egg prices, and the job market. I wish I could pretend to be shocked at this point.
That was my perception of it at least. I'm not a US citizen. Job market might have been a big one but even that is partially social as a rejection of globalism.
> That was true when bitcoin was 2 cents
I largely agree with you, but to nitpick: when bitcoin was 2 cents, blockspace was free, and miners regularly accepted zero-fee transactions into blocks. Today, you're not getting a transaction into a block without paying at least a couple hundred sats. Your statement is true today, but it wasn't like this until April-May 2012 when Satoshi Dice started filling blocks up. See Fig 3 on page 9 of <https://maltemoeser.de/paper/transaction-fees.pdf> or look through some early blocks.
But I know people whose preference would be something like Ron Paul > Bernie Sanders > Trump > Kamala, which might sound utterly bizarre until you realize that there are multiple factors at play and "we want tax cuts for the rich" is not one of them.
And all as a side gig, too. For a craft their forefathers spent entire careers honing. Is that really how a craftsman masters their craft?
As an anecdote, in my country there is a very popular brand of supermarket pizzas, Casa Tarradellas. I never buy them but a friend of mine used to eat them really frequently. So once he shows up at my house with one, and I say OK, I'm going to heat it. I serve it, he tries a bite and is totally blown away. He says "What did you do? I've been eating these pizzas for years and they never taste like this, this is amazing, the best Casa Tarradellas pizza I've ever had".
The answer was that he used the microwave and I had heated it in the regular oven...
I'm saying that the blind acceptance of the status quo does not allow for that status to be questioned.
You see the welfare amounts, or retirement amounts as limited. Well then, what would it take to increase them? How could a society increase productivity such that more could be produced in less time?
Are some of our mindsets preventing us from seeing alternatives?
Given that society has reinvented itself many times throughout history, are more reinventions possible?
People are welcome to whatever preference they like. Democracy lets them choose. But US democracy is deliberately planned to prefer the "no one wins" scenario. That's not the democracy most of the world uses.
If you expected applications to be deployed that would take up blockspace when used, and you were going to build those applications yourself, then it was still rational.
In 2012 people were describing smart contracts, joint-custody accounts to secure assets better, and many other applications that are commonplace and have their own critique and discussion now.
It's like seeing an island full of resources and realizing that the bridges and ferry routes haven't been built yet, and that:
1) you can get to that island yourself before everyone else
2) you can also build the bridge and put up a toll booth
3) other bridges will be built
4) and other people can also come to the island early at great difficulty too
The same play is still true on other blockchains, and sometimes back again on bitcoin.
I’ve done the trade many times over the past 15 years.
No, just corporate greed and political corruption. If we wanna change that, words won't do at this point.
>Given that society has reinvented itself many times through history, are more reinvention possible?
Yes, and through what catalyst has society reinvented itself? Reasonable discourse to a civil population appealing to emotion and reason? A sudden burst of altruism to try and cement a positive legacy?
It will reinvent itself eventually. Definitely in my lifetime. I don't know how many of us will survive to see the other side.
I have never had that issue when heating stuff up. Your pizza example is not reheating (and generally you never want to reheat anything that’s supposed to be crispy in the microwave; though not on the stove top either).
Here's a 2010 Satoshi post in a thread about transaction fees stating "we should always allow at least some free transactions": https://satoshi.nakamotoinstitute.org/posts/bitcointalk/thre...
If you had said "2 dollars" instead of "2 cents", we would be in complete agreement. All I'm saying is that mandatory transaction fees were not baked in at 2 cents.
The only thing that mattered was the malleability of the system to meet its goal of attracting transactions, including the future malleability of what you are pointing out. Blockspace became scarce. It was predictable that anything that undermined that would be excised from the network.