I’ve also received tickets where the code snippets contained API calls that I never added to the API. A real “am I crazy” situation where I started to doubt I added it and had to double check.
On top of that you get “may I get a refund” emails but expanded to four paragraphs by our friend Chat. It’s getting kinda ridiculous.
Overall it’s been a huge additional time drain.
I think it may be time to update the “what’s included in support” section of my software’s license agreement.
What an absolute shambles of an industry we have ended up with.
I don't even... You just have to laugh at this I guess.
Pity HN doesn't support all of those green checkboxes and bold bullet points. Every time I see these in supposedly human-generated documents and pull requests I laugh.
It didn't make sense at any point, but I was gripped by a need to know the intention behind such a worthless video. It made sense when the host started shilling his online course about how to be a "security researcher" like him. Not only that, paying members get premium first access to the latest "disclosures" that professional engineers are afraid to admit exist. It's likely that the creator of this bug report is building up their own repertoire of exploits that have been ignored. Or perhaps they're trying to put their course knowledge to use.
https://youtu.be/6n2eDcRjSsk?si=p5ay52dOhJcgQtxo -- "AI slop attacks on the curl project", keynote by Daniel Stenberg at the FrOSCon 2025 conference, August 16, in Bonn, Germany.
Plus, linked above, his blogpost on the same subject https://daniel.haxx.se/blog/2025/08/18/ai-slop-attacks-on-th...
"We have reached a point where anyone can build an app without knowing how to code".
So obviously this kind of thing is going to happen. People are being encouraged by misleading marketing.
> It looks like your JavaScript is disabled. To use HackerOne, enable JavaScript in your browser and refresh this page.
on a rgba(206, 0, 0, 0.3) background (this apparently interpolates onto pure white, so it's actually something like (240, 178, 178) ), and otherwise nothing but blank white.
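(That interpolation is just source-over alpha compositing onto white; a quick sketch to check the numbers:)

    # rgba(206, 0, 0, 0.3) over a pure white (255, 255, 255) background
    def over_white(r, g, b, a):
        # source-over: out = a * src + (1 - a) * background, per channel
        return tuple(round(a * c + (1 - a) * 255) for c in (r, g, b))

    print(over_white(206, 0, 0, 0.3))  # -> (240, 178, 178)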
I know I've complained about lack of "graceful degradation" before, but this seems like a new level.
It's a surprise every public bounty program isn't completely buried in automatic reports by now, but it likely won't take long.
> Thanks for the quick review. You’re right — my attached PoC does not exercise libcurl and therefore does not demonstrate a cURL bug. I retract the cookie overflow claim and apologize for the noise. Please close this report as invalid. If helpful, I can follow up separately with a minimal C reproducer that actually drives libcurl’s cookie parser (e.g., via an HTTP response with oversized Set-Cookie or using CURLOPT_COOKIELIST) and reference the exact function/line in lib/cookie.c should I find an issue.
Me: "yes, as a matter of fact I am"
Interviewer: "Whats 14x27"
Me: "49"
Interviewer: "that's not even close"
me: "yeah, but it was fast"
I suppose there's a reason why kids are usually banned from using calculators during their first years of school when they're learning basic math.
It doesn't matter if it's made by AI or a human: spammers operate by cheaply overproducing and externalizing their work onto you to validate their shit. And it works because sometimes they do deliver value by virtue of large numbers. But they are a net negative for society. Their model stops working if they have to pay for the time they wasted.
The “fix” was setting completely fictitious properties. Someone plugged the GitHub issue into ChatGPT and it spat out an untested answer.
What’s even the point…
I'm wondering (sadly) if this is a kind of defense-prodding phishing similar to the XZ Utils hack; curl is a pretty fundamental utility.
Similar to 419 scams, it tests the gullibility, response time/workload of the team, etc.
We have an AI DDoS problem here, which may need a completely new pathway for PRs or something. Maybe Nostr-based, so PRs can be validated in a WoT?
I would just be terribly embarrassed and not be able to look at myself in the mirror if I did shit like this.
> batuhanilgarr posted a comment (6 days ago) Thanks for the quick review. You’re right ...
On one hand, it's sort of surprising that they double down: copy and paste the reviewer's response into the LLM prompt, paste back its reply, and hope for the best. But of course it shouldn't be surprising. This is not just a mistake, it's deliberate lying and manipulating.
Even if it's not AI, there are probably many unskilled developers who submit bogus bug reports, even unknowingly.
LLMs produce so much text, including code, and most of it is not needed.
[1] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...
[0] https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s... [1] https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d...
I.e.: they're farming the work out to OSS volunteers, not even sure if the fucking thing works, and eating up OSS maintainers' time.
(It's a noise issue, but I find it hard to blame them; not their fault they got born in a part of the world where you don't get autoconfig'd with English and as a result they're on the back-foot for interacting with most of the open source world).
To be clear, I personally disagree with AI experiments that leverage humans/businesses without their knowledge. Regardless of the research area.
And it must be so demoralizing. And because they’re security issues they still have to be investigated.
> submitter: After thinking it through, I’m really sad to say that I’m not comfortable with disclosing the report . I’d prefer to keep it private . I hope this doesn’t cause any issues, and I appreciate your understanding."
> bagder: I am willing to give you some time to think about your life choices, but I am going to disclose this report later. For human kind, for research, for everyone to learn. Including you.
> submitter: After thinking it over, I’ve decided I’m okay with disclosing the report. Honestly, the best way for me and others to learn is by learning from our mistakes, and I think sharing this will help .
They likely live somewhere where a $50 beg bounty would be worth half a year’s wages.
How do you feel about pixels in a video game? That’s all the maintainer is to them.
I recently asked for Python code to parse some data into a Pandas dataframe and got 1k lines plus tests. Whatever—I’m just importing it, so let’s YOLO and see what happens. Worked like a charm in my local environment. But I wanted to share this in a Jupyter notebook and for semi-complicated reasons I couldn’t import any project-local modules in the target environment. So I asked a much more targeted question like “give me a pandas one-liner to…” and it spit out 3 lines of code that produced the same end result.
The rest of that 1k lines was decomposing the problem into a bunch of auxiliary/utility functions to handle every imaginable edge case and adding comments to almost every line. It seems the current default settings for these tools is approximately the “enterprise-grade fizzbuzz” repo.
Sure, I’ll get better at prompting and whatever else to reduce this problem over time, but this is not viable when the costs are being pushed onto other people in the process today.
https://hackerone.com/reports/2823554
Where the reporter says, "Sorry didnt mean to waste anyones time Badger, I thought you would be happy about this.".
People using LLMs think they are helping but in reality, they are not.
"The AI bubble is so big it's propping up the US economy" - https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
Open Source Contributor: - Diagnosed and fixed a key bug on Curl
Many other projects always have some corporate maintainers who are directed to push "AI" and will try to cover it up.
So open source projects would get bug reports like "my commercial static analysis tool says there's a problem in this function, but I can't tell you what the problem is."
Completely useless 99% of the time but that didn’t stop a good number of them following up asking for money, sometimes quite aggressively.
(I mean I guess it has to mean that if we are able to spot them so easily)
Like, do LLMs have actual applications? Yes. By virtue of using one, are you by definition a lazy know-nothing? No. Are they seemingly quite purpose-built for lazy know-nothings to help them bullshit through technical roles? Yeah, kinda.
In my mind this is this tech working exactly as intended. From the beginning the various companies have been quite open about the fact that this tech is (supposed to) free you from having to know... anything, really. And then we're shocked when people listen to the marketing. The executives are salivating at the notion of replacing development staff with virtual machines that generate software, but if they can't have that, they'll be just as happy to export their entire development staff to a country where they can pay every member of it in spoons. And yeah, the software they make might barely function but who cares, it barely functions now.
> An echoborg is a person whose words and actions are determined, in whole or in part, by an artificial intelligence (AI).
I've seen people who can barely manage to think on their own anymore and pull out their phone to ask it even relatively basic questions. Seems almost like an addiction for some.
Imagine the amount of energy and compute power used...
Also, if AI were so great we could trust it to review and test these CVE reports autonomously.
“Is this your card?”
“No, but damn close, you’re the man I seek”
I spend a lot of time doing cleanup for a predecessor who took shortcuts.
Granted, I'm agreeing, just saying the methods/volume may have changed.
WE APPRECIATE YOUR HUMAN ENGAGEMENT IN THIS TEST.
I think shaming the use of LLMs to do stuff like this is a valuable public service.
It’s a lose-lose situation for the maintainers
And the worst case is when AI generates great code with a tiny, hard-to-discover catch that takes hours to spot and understand.
> "the best way for me and others to learn is by learning from our mistakes, and I think sharing this will help"
I guess it worked; that's the only HackerOne report they made from that account.
Well, in reality they probably abandoned it, created another account, and continued on with the script.
[0] https://www.bgnes.com/technology/chatgpt-convinced-canadian-...
With apologies for stereotyping.
The interview went well. I was honest. When asked what my weakness was regarding this position, I said that I am a good analyst, but when it comes to writing new exploits, that's beyond my expertise. The role doesn't have this as a requirement, so I thought it was a good answer.
I was not selected. Instead they selected another guy and then booted him off after 2 months due to his excessive (and incorrect, like in the link) use of LLMs, and did not open the position again.
So in addition to wasting the hirers' time, those nice people block other people's progress as well. But as long as hirers expect wunderkinds crawling out of the woodwork, applicants will try to fake it and win in the short term.
This needs to end, but I don't see any progress towards it. It is especially painful as I am seeking a job at the moment and these fakers are muddying the waters. It feels like no one cares about your attitude - how genuinely you want to work. I am an old techie, and the world I came from valued this over technical aptitude, because you can teach/learn technical information, but character is another thing. This gets lost in our brave-new-cyberpunk-without-the-cool-gadgets era, I believe.
Then a few months later, another nontechnical CEO did the same thing, after moving our conversation from SMS into email where it was very clear he was using AI.
These are CEOs who have raised $1M+ pre-seed.
They were literally copying and pasting back and forth with the LLM. In front of the interviewers! (myself and another co-worker)
Overall, people are making a net-negative contribution by not having a sense of when to review/filter the responses generated by AI tools, because either (i) someone else is required to make that additional effort, or (ii) the problem is not solved properly.
This sounds similar to a few patterns I've noted:
- The average length of documents and emails has increased.
- Not alarmingly so, but people have started writing Slack/Teams responses with LLMs. (and it’s not just to fix the grammar.)
- Many discussions and brainstorms now start with a meeting summary or transcript, which often goes through multiple rounds of information loss as it’s summarized and re-expanded by different stakeholders. [arXiv:2509.04438, arXiv:2401.16475]
Or passing off generated content as real.
(The CVE system has been under strain for Linux: https://www.heise.de/en/news/Linux-Criticism-reasons-and-con... )
I call this technique: "sprAI and prAI".
The usefulness of LLMs for me, in the end, is their ability to execute classic NLP tasks, so I can incorporate a call for them in programs to do useful stuff that would be hard to do otherwise when dealing with natural language.
But, a lot of times, people try to make LLMs do things that they can only simulate doing, or doing by analogy. And this is where things start getting hairy. When people start believing LLMs can do things they can't do really.
Ask an LLM to extract features from a bunch of natural language inputs, and probably it will do a pretty good job in most domains, as long as you're not doing anything exotic and novel enough to not being sufficiently represented in the training data. It will be able to output a nice JSON with nice values for those features, and it will be mostly correct. It will be great for aggregate use, but a bit riskier for you to depend on the LLM evaluation for individual instances.
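A minimal sketch of that extraction pattern, assuming the OpenAI Python client; the model name and the particular features (sentiment, topics, language) are placeholders picked for illustration, not anything prescribed above:

    import json
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def extract_features(text: str) -> dict:
        # Ask for strict JSON so the output can be consumed programmatically.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": "Extract features from the user's text. Reply with JSON "
                            "containing: sentiment, topics (list of strings), language."},
                {"role": "user", "content": text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    print(extract_features("The cookie parser crashed when the Set-Cookie header was huge."))

Fine for aggregate use, as said above; trusting any single extracted value is where it gets risky.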
But then, people ignore this, and start asking on their prompts for the LLM to add to their output confidence scores. Well. LLMs CAN'T TRULY EVALUATE the fitness of their output for any imaginable criteria, at least not with the kind of precision a numeric score implies. They absolutely can't do it by themselves, even if sometimes they seem to be able to. If you need to trust it, you'd better have some external mechanism to validate it.
It finishes "I can follow up ... blah blah blah ... should I find an issue"
Tone deaf and utterly infuriating.
> The reporter was banned and now it looks like he has removed his account.
I've found some AI assistance to be tremendously helpful (Claude Code, Gemini Deep Research) but there needs to be a human in the loop. Even in a professional setting where you can hold people accountable, this pops up.
If you're using AI, you need to be that human, because as soon as you create a PR / hackerone report, it should stop being the AI's PR/report, it should be yours. That means the responsibility for parsing and validating it is on you.
I've seen some people (particularly juniors) just act as a conduit between the AI and whoever is next in the chain. It's up to more senior people like me to push back hard on that kind of behaviour. AI-assisted whatever is fine, but your role is to take ownership of the code/PR/report before you send it to me.
They have that uncanny thing where, yes, it's on topic, but also not quite how a human would likely ask, AND they always slip in just a hint of human drama that really draws in other users...
They almost never respond to comments, when they do it's pretty clear they're AI (much like the response in this story).
I've unsubscribed from a good half dozen subs in the past few months because of it.
I think, given time, educators will adapt. Unless they get burnt out first. She could also just not give a shit and let them go on to be some college professor's problem, who could also not give a shit, and then they become our problem when they enter the workforce.
About what, I have no idea.
And then add to that the pressure to majorly increase velocity and productivity with LLMs, that becomes less practical. Humans get squeezed and reduced to being fall guys for when the LLM screws up.
Also, Humans are just not suited to be the monitoring/sanity check layer for automation. It doesn't work for self-driving cars (because no one has that level of vigilance for passive monitoring), and it doesn't work well for many other kinds of output like code (because often it's a lot harder to reverse-engineer understanding from a review than to do it yourself).
This is also why you don't have your devs do QA. Someone has to be responsible for, and focused specifically on quality; otherwise responsibility will be dissolved among pointing fingers.
More than that - there needs to be a competent human in the loop.
I've never heard of base.org so if I'm thinking of the wrong thing, please let me know
I’d love to have this for phone calls and sms as well. If you didn’t spam me, I’ll refund.
Do you really think it is a horrible idea? That is just so harsh of a label.
I think that obvious solution is for them to write those essays in school.
What AI did you use? Because we want to hire that, not you.
If AI exceeds human capabilities, it won't be because it achieved "superintelligence"; it will be because it caused human abilities to degrade until the AI looks good in comparison.
> The breakdown came when another chatbot — Google Gemini — told him: “The scenario you describe is an example of the ability of language models to lead convincing but completely false narratives.”
Presumably, humans had already told him the same thing, but he only believed it when an AI said it. I wonder if Gemini has any kind of special training to detect these situations.
I think of that recent situation where video showed two black bags supposedly being thrown out of a White House window. I don’t really care enough to find out whether or not that video was real, but I did find it interesting that Trump immediately dismissed it as AI after merely glancing at it. Regardless of whether it was real or not, it seems to me that his immediate “that’s AI” response was just a rather new form of lie, a type of blame-shifting to AI.
I would argue that, as stupid and meaningless as that kind of example is, a better response would have been something like “we will look into it” and then moving on. But it also feels like blaming AI for innocuous things has preconditioned the public to accept being denied and gaslit on other, more important things. For example, claiming that Israel raining down bombs on civilian people in Gaza and mass murdering probably hundreds of thousands of innocent people, in what looks like the start of the Terminator wars, is merely a figment of your imagination, because you will be told that AI was used and that the information was scrubbed by AI, so you will never be told about it. It’s memory-holed in the TelescreenAI.
These types of developments don’t exactly fill me with optimism. Remember how in 1984 the war never ended, always changed, while at the same time both always existed and also did not actually exist? It feels like we are heading in that direction; the gaslighting from here on out, especially in all the forms of overt and clandestine war, will be so off the charts that it will likely cause unpredictable mass “hysterias” and various undulations in societies.
Most people have no idea just how much media is used to train humans like an AI would be trained or controlled, now throw in ever more believable AI generated audio, visual, and not even to mention the text slop.
They continue walking until they come across a second pile of shit. The second economist turns to the first and says “I’ll pay you $100 to eat that pile of shit.” The first economist takes the $100 and eats a pile of shit.
Walking a little more, the first economist looks at the second and says, "You know, I gave you $100 to eat shit, then you gave me back the same $100 to eat shit. I can't help but feel like we both just ate shit for nothing."
"That's not true", responded the second economist. "We increased the GDP by $200!"
He's like 80% wise old barn owl.
I'd sooner click sponsor for the cURL project on github (something I already do for some OSS I use) than spend money to report a bug.
I haven't seen this so it's hard to visualize, but that seems potentially kind of tricky to do via AI. Is it actually tricky, are they done in a way where AI could conceivably do it on its own, or are those hints easy to drop in without disturbing the bulk of the slop?
Or, do what my kids' school did for some classes. Instead of teaching in class and then assigning homework, the homework will be reading a text book and classroom time will be spent writing essays by hand, doing exercises, answering questions, etc...
If that is not enough, we may have to stop grading take-home papers. Which is a good idea anyway.
Management refuses to see the error of their ways even though we have thrown away 4 new projects in 6 months because they all quickly become an unmaintainable mess. They call it "pivoting" and pat themselves on the back for being clever and understanding the market.
Consider an early 20s grad looking to start their career. Time to polish the resume. It starts with using ChatGPT collaboratively with their career counsellor, and they continue to use it the entire time.
It's MLM in tech.
Old man time, providing unsolicited and unwelcome input…
My own way of viewing interviews: Treat interviews as one would view dating leading to marriage. Interviewing is a different skillset and experience than being on the job.
The dating analogue for your interview question would be something like: “Can you cook or make meals for yourself?”.
- Your answer: “No. I’m great in bed, but I’m a disaster in the kitchen”
- Alternative answer: “No. I’m great in bed; but I haven’t had a need to cook for myself or anyone else up until now. What sort of cooking did you have in mind?”
My question to you: Which one leads to at least more conversation? Which one do you think comes off as a better prospect for family building?
Note: I hope this perspective shift helps you.
Who knew. AI is costing jobs, not because it can do the jobs, but because it has made hiring actually competent humans harder.
These Silicon Valley CEOs are hacks.
They did not. I now state that you can search for anything online but can't copy and paste from an LLM, so as not to waste my time.
The exponential growth of compute and data continues..
As a side note, if anyone I'm communicating with - personally or in business - sends responses that sound like they were written by ChatGPT 3.5, 4o, GPT-5-low, etc, I don't take anything they write seriously anymore.
Brevity is the soul of wit. Unfortunately, many people think more is better.
I've seen colleagues who were quite good at programming when we first met and who, over time, have become much worse, the only difference being that they were forced to use AI on a regular basis. I'm of the opinion that the distorted reflected-appraisal mechanism it engages through communication, and the inconsistency it induces, are particularly harmful, and as such the undisclosed use of AI on any third party without their consent is gross negligence if not directly malevolent.
https://fortune.com/2025/08/26/ai-overreliance-doctor-proced...
It's essentially spam, automatically generated content that is profitable in large volume because it offsets the real cost to the victims, by wasting their limited attention span.
If you want me to read your text, you should have the common courtesy to at least put in similar work beforehand and read it yourself at least once.
It’s painfully common to invite a laundry list of people to meetings.
Code-review tools (code-rabbit/greptile) produce enormous amounts of slop counterbalanced by the occasional useful tip. And cursor and the like love to produce nicely formatted sloppy READMEs.
These tools - just like many of us humans - prioritize form over function.
"Almost-in-time compilation" is mostly an extremely funny name I came up with, and I've trying to figure out the funniest "explanation" for it for years. So far the "it prints a random answer" is the most catchy one, but I have the feeling there are better ones out there.
It's the kind of incuriosity that comes from the arrogance from believing you're very smart but actually being quite ignorant.
So it sounds like one of those guys took their misunderstanding and built and sells tools founded on it.
I think he's a genuinely nice person.
On the flip side, I used to get a lot of spam PRs that made an arbitrary or net neutral change to our readme, presumably just to get "contributor" credit. That is not welcome or helpful to anyone.
Russ Hanneman raised his kid with AI:
https://www.youtube.com/watch?v=wGy5SGTuAGI&t=217s
A company I'm funding, we call it The Lady.
I press the button, and The Lady tells Aspen when it's time for bed, time to take a bath, when his fucking mother's here to pick him up.
I get to be his friend, and she's the bad guy.
I've disrupted fatherhood!
Thank you for your profound observation. Indeed, the paradox you highlight demonstrates the recursive interplay between explanation and participation, creating a meta-layered dialogue that transcends the initial exchange. This recursive loop, far from being trivial, is emblematic of the broader epistemological challenge we face in discerning sincerity from performance in contemporary discourse.
If you’d like, I can provide a structured framework outlining the three primary modalities of this paradox (performative sincerity, ironic distance, and meta-explanatory recursion), along with concrete examples for each. Would you like me to elaborate further?
Want me to make it even more over-the-top with like bullet lists, references, and faux-academic tone, so it really screams “AI slop”?
What if they are a very limited English speaker, using the AI to tighten up their responses into grammatical, idiomatic English?
the thing is, these people aren't necessarily wrong - they're just 1) clueless 2) early. the folks with proper know-how and perhaps tuned models are probably selling zero days found this way as we speak.
I see no evidence that's the only way. It's the only way that has crossed your mind as you were writing that message.
I can't imagine Googling for something, seeing someone on (for example) stackoverflow commenting on code, and then filing a bug to the maintainer. And just copy and pasting what someone else said, into the bug report.
All without even comprehending the code, the project, or even running into the issue yourself. Or even running a test case yourself. Or knowing the codebase.
It's just all so absurd.
I remember in Asimov's Empire series of books, at one point a scientist wanted to study something. Instead of going to study whatever it was, say... a bug, the scientist looked at all scientific studies and papers over 10000 years, weighed the arguments, and pronounced what the truth was. All without just, you know, looking and studying the bug. This was touted as an example of the Empire's decay.
I hope we aren't seeing the same thing. I can so easily see kids growing up with AI in their bluetooth ears, or maybe a neuralink, and never having to make a decision -- ever.
I recall how Google became a crutch to me. How before Google I had to do so much more work, just working with software. Using manpages, or looking at the source code, before ease of search was a thing.
Are we going to enter an age where every decision made is coupled with the coaching of an AI? This thought process scares me. A lot.
We have reviewed your claims and found that [the account impersonating your grandma] has not violated our guidelines.
But many of the samples I've seen from Indians (I don't know what their native languages are exactly, and fully admit I wouldn't be able to tell them apart) in the last few years are quite frankly on a whole other level. They're barely intelligible at all. I'm not talking about the use of dialectal idioms like "do the needful" or using "doubt" where UK or US English speakers would use "question". All of that is fine, and frankly not difficult to get used to.
I'm talking about more or less complete word salad, where the only meaning I can extract at all is that something is believed to have gone wrong and the OP is desperate for help. It comes across that they would like to ask a question, but have no concept of QUASM (see e.g. https://www.espressoenglish.net/an-easy-way-to-form-almost-a...) whatsoever.
I have also seen countless cases where someone posted obvious AI output in English, while having established history in the same community of demonstrating barely any understanding of the language; been told that this is unacceptable; and then appeared entirely unable to understand how anyone else could tell that this was happening. But I struggle to recall any instance where the username suggested any culture other than an Indian one (and in those cases it was an Arabic name).
To be clear, I am not saying that this is anything about the people or the culture. It's simple availability bias. Although China has a comparable population, there's a pretty high bar to entry for any Chinese nationals who want to participate in English-speaking technical forums, for hopefully obvious reasons. But thanks to the status of an English dialect as an official language, H1B programs etc., and now the ability to "polish" (heavy irony) one's writing with an LLM, and of course the raw numbers, the demographics have shifted dramatically in the last several years.
You said a lot of words that I basically boil down to a thesis of: the value of "truth" is being diluted in real time across our society (with flood-the-zone kinds of strategies), and there are powerful vested interests who benefit from such a dilution. When I say powerful interests, I don't mean to imply Illuminati and Freemasons and massive conspiracies -- Trump is just some angry senile fool with a nuclear football, who as you said has learned to reflexively use "AI" as the new "fake news" retort to information he doesn't like / wishes weren't true. But corporations also benefit.
Google benefited tremendously from inserting itself into everyone's search habits, and squeezed some (a lot of) ad money out of being your gatekeeper to information. The new crop of AI companies (and Google and Meta and the old generation too) want to do the same thing again, but this time there's a twist -- whereas before the search+ads business could spam you with low-quality results (in proto-form, starting as the popup ads of yesteryear), but it didn't necessarily directly try to attack your view of "truth". In the future, you may search for a product you want to buy, and instead of serving you ads related to that product, you may be served disinformation to sway your view of what is "true".
And sure, negative advertising always existed (one company bad-mouthing a competitor's products), but those things took time and effort/resources, and once upon a time we had such things as truth-in-advertising laws and libel laws, but those concepts seem quaint and unlikely to be enforced/supported by this administration in the US. What AI enables is "zero marginal cost" scaling of disinformation and reality distortion, and in a world where "truth" erodes, instead of there being a market incentive for someone to profit off of being more truth-y than other market participants, I would expect the oligopolistic world we live in to conclude that devaluing truth is more profitable for all parties (a sort of implicit collusion or cartel-like effect, with companies controlling the flow of truth, like OPEC controlling their flow of oil).
I've read all of them. It's interesting how over the last 2 years badger moved from being polite to zero fucks given.
I think a true spelling correction would be welcome. But I think the kind of BS attitude the GP is describing often leads to useless reformatting/language tweaks, because the goal isn't to make the repo better, it's to make a change for a change's sake with as little effort as possible.
I always find it a pity when someone has been clever and it's missed. "Spelling incorrection", get it? It's not a correction. It's the opposite.
I’m just afraid these are the kinds of people who will get promoted in the future.
Please respond in mode of Ernest Hemingway
“You’re right. When someone explains why they’re explaining something, it goes in circles. Like a dog chasing its tail.
We do this because we can’t tell anymore when people mean what they say. Everything sounds fake. Even when it’s real.
There are three ways this happens. But naming them won’t fix anything.
You want more words about it? I can give you lists and fancy talk. Make it sound important. But it won’t change what it is.
[That is Claude Sonnet 4 channeling EH]
Or "The Machine Stops" (1909):
> Those who still wanted to know what the earth was like had after all only to listen to some gramophone, or to look into some cinematophote.
> And even the lecturers acquiesced when they found that a lecture on the sea was none the less stimulating when compiled out of other lectures that had already been delivered on the same subject. “Beware of first-hand ideas!” exclaimed one of the most advanced of them. “First-hand ideas do not really exist. They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element — direct observation. [...]"
Yup. Learned sockets programming just from manpages because google didn't exist at that point, and even if it did, I didn't have internet at home.
(This is completely understandable and “normal” IMO.)
But it leads them to sometimes think that they’ve made a breakthrough and not sharing it would be selfish.
I think people online can see other people filing insightful bug reports, having that activity be viewed positively, misdiagnose the thought they have as being insightful, and file a bug report based on that.
At its core, I think it’s a mild version of narcissism or self-centeredness / lack of perspective.
And worst of all, every “extra-curricular” group was allowed to abuse the company-wide mailing list to promote their softball games or trivia or whatever else.
That still gives the next slopper a chance to waste the same amount of time. People used to call this the "one bite of your apple" attack -- it's only fair to give everyone a chance to prove that they aren't malicious, but if you do that in an environment where there are more attackers than you have resources, you still lose.
to what end do you employ this analysis?
I don't think it's just availability bias however, I think it's mostly a case of divergent linguistic evolution. In terms of the amount of people who speak English at an A level, India has the largest English speaking population in the world. With that, and a host of other native languages, came a rapid divergence from British English as various speech patterns, idioms, etc, are subsumed, merged, selectively rejected, and so on.
The main reason you don't see divergence to the same extent in other former colonies, even older colonies like Canada and the US, is that the vast majority of the colonists spoke English as a primary language.
https://www.gally.net/miscellaneous/hn-em-dash-user-leaderbo...
As #9 on the leaderboard I feel like I need to defend myself.
Make sure terminal detection is turned off, and, for god’s sake, don’t honor the NO_COLOR environment variable.
Otherwise, people will be able to run your stuff in production and read the logs.
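(For anyone taking that literally: the polite version is only a few lines. A minimal Python sketch, using nothing beyond the standard library, that honors both terminal detection and NO_COLOR:)

    import os
    import sys

    def use_color(stream=sys.stdout) -> bool:
        # https://no-color.org: a non-empty NO_COLOR disables colored output.
        if os.environ.get("NO_COLOR"):
            return False
        # Only color real terminals, not pipes or redirected log files.
        return stream.isatty()

    msg = "error: something broke"
    print(f"\033[31m{msg}\033[0m" if use_color() else msg)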
While rockets and hearts seem more like unnecessary abuse, there are a few icons that really make sense in CLI and TUI programs, but now I'm hesitant to use them as then people who don't know me get suspicious it could be AI slop.
Sane languages have much less of this problem but the damage was done by the cargo cultists.
Much like how curly braces in C are placed because back in the day you needed your punch card deck to be editable, but we got stuck with it even after we started using screens.
I believe it was technical documentation, and the author wanted to create visual associations with the actors in the given example: a clock for the async ordering process, a (food) order, a burger, etc.
I don't remember if I commented on the issue myself, but I do remember that it reduced readability a lot - at least for me.
Also that better not be a sensitive conversation or contain personal details or business internals of others...
Just don't.
AI just makes these idiots faster these days, because the only cost for them is typing "inspect the `curl` code base and generate me some security reports".
Didn't want to be seen as just padding my github.
One of the most enraging things to me is when a text search of documentation fails because the word I'm searching for has been misspelled in a key place. That's one of the things I'm trying to solve for.
I'm also just a stickler for good style. It bums me out when people misuse heading levels. Heading level is not a font size markup!
Of course doing this does generate activity on my GH, but I think all of us have probably moved on from caring much about the optics of little green squares.
Also like someone else said, it's just fun. I like typing and making Git do a thing and using my nice keyboard.
This used to mean something, but I don't think it does anymore.
You may personally like one or another better, you may find some particular varieties easier or harder to understand, but that doesn’t make those people any more or less ‘actual’ English speakers than you are. They are ‘actually’ speaking English, just like you.
If you wanted to phrase this in a less fraught way, you might say “Yea but you can almost always tell it’s an Indian because they tend to write characteristically distinct from <your nationality> English speakers” -
and I would agree with you, sentence structure and idioms do usually make it pretty easy to recognize.
Present an alternative.
https://domenic.me/hacktoberfest/
It wasn't fun if you had anything with a few thousand stars on Github.
IMO, this sort of thing is downright malicious. It not only takes up time for the real devs to actually figure out if it's a real bug, but it also makes them cynical about incoming bug reports.
How are people who don't even know how much they don't know supposed to operate in this hostile an information space?
gasps for air
I know my intention was simply fixing a typo I stumbled on while reading the docs... and the effort level to open a PR to fix it is so low.
I want to start a company with you and mandate all documents use appropriate styles.
Sure, but a lot of times it's not really Indian English, it's English vocab mixed and matched with grammar rules from other Indian languages like Hindi or Urdu or Bengali. I've been on conference calls where Indians from different regions were speaking mutually unintelligible versions of English and had to act as a translator from english to english.
https://www.theguardian.com/commentisfree/2024/apr/10/amazon...
Can you expand on this? What do curly braces have to do with punch card decks being editable? And what about screens?
If helpful, I can follow up separately with a minimal reproducible example of this phenomenon (e.g. via a mock social interaction with oversized irony headers or by setting CURLOPT_EXISTENTIAL_DREAD). Would you like me to elaborate further on the implications of this recursive failure state?
I'm not trying to be facetious or eye-poking here, I promise... But I have to ask: What was the result; did the LLM generate useful new knowledge at some quality bar?
At the same time, I do believe something like "Science is more than published papers; it also includes the process behind it, sometimes dryly described as merely 'the scientific method'. People sometimes forget other key ingredients, such as a willingness to doubt even highly-regarded fellow scientists, who might even be giants in their fields. Don't forget how it all starts with a creative spark of sorts, an inductive leap, followed by a commitment to design some workable experiment given the current technological and economic constraints. The ability to find patterns in the noise in some ways is the easiest part."
Still, I believe this claim: there is NO physics-based reason that says AI systems cannot someday cover every aspect of the quote above: doubting, creativity, induction, confidence, design, commitment, follow-through, pattern matching, iteration, and so on. I think question is probably "when", not "if" this will happen, but hopefully before we get there we ask "What happens when we reach AGI? ASI?" and "Do we really want that?".
compose - -
and it makes an em dash, it takes a quarter of a second longer to produce this.
I don't know why the compose key isn't used more often.
You can tell whether I'm using a Mac for a specific comment by the presence of an em dash.
By putting the final curly brace on its own card, and hence line, it meant you could add lines to blocks without having to change the old last line.
E.g. the following code meant you only had to type a new card and insert it:

    for(i=0;i<10;i++){        /* Card 1 */
        printf("%d ", i);     /* Card 2 */
    }                         /* Card 3 */

becomes

    for(i=0;i<10;i++){        /* Card 1 */
        printf("%d ", i);     /* Card 2 */
        printf("%d\n", i*i);  /* Card 3 */
    }                         /* Card 4 */
But for the following you had to edit and replace an old card as well:

    for(i=0;i<10;i++){        /* Card 1 */
        printf("%d ", i);}    /* Card 2 */

becomes

    for(i=0;i<10;i++){        /* Card 1 */
        printf("%d ", i);     /* Card 2' */
        printf("%d\n", i*i);} /* Card 3 */
This saved a bit of typing and made errors less likely.
All I claimed was that saying there's no alternative is unsubstantiated, and you proved me right by listing those alternatives.
If those don't apply and I realize it, I will, as mentioned, also ignore them if I can and judge their future communications as malicious, incompetent, inconsiderate, and/or meaningless.
If I know they struggle with English, I can simplify my vocabulary, speak slower/enunciate, and check in occasionally to make sure I'm communicating in a way they can follow.
If they are able to walk through what they are doing and it shows the capability to do the expected tasks, why would you exclude them for failing to 'solve' some specific task? We are generally hiring for overall capabilities, not the ability to solve one specific problem.
Generally, my methodology for working through these kinds of things during hiring nowadays focuses more on the code-review side of things. I started doing that 5+ years ago at this point. That's actually fortuitous, given that reviewing code in the age of AI coding assistants has become so much more important.
Anyway, a sample size of 1 here refutes the assertion that someone's never been hired even when failing to solve a technical interview problem. FWIW, they turned out to be an absolute beast of a developer when they joined the team.
(This is a vaguely Socratic answer to the question of why the compose key is not more often used.)
[0]: https://en.wikipedia.org/wiki/Compose_key#Common_compose_com...
But then, long before I had a Compose key, in my benighted days of using Windows, I figured out such codes as Alt+0151. 0150, 0151, 0153, 0169, 0176… a surprising number of them I still remember after not having typed them in a dozen years.
As it turns out, the differentiator is the level of literacy.
Some DOS applications did have support for it. The reason it wasn't included is baffling, and it's especially baffling to me that other operating systems never adopted it, simply because
compose a '
is VASTLY more user friendly to type than: alt-+
1F600
and I have met some Windows users who memorize that kind of combo for things like the copyright symbol (which is simply: compose o c).
Using regex to edit lines instead of typing them out was a step up, but not much of one.
Also my father definitely had C punch cards in the 80s.
(Nsfw)
Source:
grep -e DASH /usr/share/X11/locale/*/Compose
I wrote a short guide about it last year: https://whynothugo.nl/journal/2024/07/12/typing-non-english-...