Conversely, I sometimes present it with some existing code and ask it what it does. If it gets it wrong, that's a good sign my API is confusing, and it shows me how.
These are ways to harness what neural networks are best at: not providing accurate information but making shit up that is highly plausible, "hallucination". Creativity, not logic.
(The best thing about this is that I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code, which often takes longer than just writing the code the usual way.)
There are multiple ways that an interface can be bad, and being unintuitive is the only one that this will fix. It could also be inherently inefficient or unreliable, for example, or lack composability. The AI won't help with those. But it can make sure your API is guessable and understandable, and that's very valuable.
Unfortunately, this only works with APIs that aren't already super popular.
I know nothing about this. I imagine people are already working on it; I wonder what they've figured out.
(Alternatively, in the future can I pay OpenAI to get ChatGPT to be more likely to recommend my product than my competitors?)
It’s not that they added a new feature because there was demand.
They added a new feature because technology hallucinated a feature that didn’t exist.
The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.
That’s what the headline is, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be so benign as it was this time.
I also went back to just sleeping on those flights and using connected models for most of my code generation needs.
We get ~50% of traffic from ChatGPT now; unfortunately, a large number of the features it says we have are made up.
I really don't want to get into a state of ChatGPT-Driven-Development as I imagine that will be never ending!
If the super-intelligent AI understands human incentives and is in control of a very popular service, it can subtly influence people to its agenda by using the power of mass usage. Like how a search engine can influence a population's view of an issue by changing the rankings of news sources that it prefers.
Maybe I'll turn it into a feature request then ...
Figuring out the paths that users (or LLMs) actually want to take—not based on your original design or model of what paths they should want, but based on the paths that they actually do want and do tread down. Aka, meeting demand.
On the other hand, adding a feature because you believe it is a feature your product should have, a feature that fits your vision and strategy, is a pretty sound approach that works regardless of what made you think of that feature in the first place.
I recall that early on a coworker was saying that ChatGPT hallucinated a simpler API than the one we offered, albeit with some easy-to-fix errors and extra assumptions that could've been nicer defaults in the API. I'm not sure if this ever got implemented, though, as he was from a different team.
> Correct feature almost exists
> Creator profile: analytical, perceptive, responsive;
> Feature within product scope, creator ability
> Induce demand
> await "That doesn't work" => "Thanks!"
> update memory
Had something similar happen to us with our dev-tools SaaS. Non-devs started coming to the product because GPT told them about it. We had to change parts of the onboarding and integration to accommodate the non-devs who were having a harder time reading the documentation and understanding what to do.
The user is not going to understand this. The user may not even need that feature at all to accomplish whatever it is they're doing. Alternatives may exist. The consequences will be severe if companies don't take this seriously.
It'll all be fine in a few years. :-;
"Would you still have added this feature if ChatGPT hadn't bullied you into it?" Absolutely not.
I feel like this resolves several longstanding time travel paradox tropes.
Some people express concerns about AGI creating swarms of robots to conquer the earth and make humans do its bidding. I think market forces are a much more straightforward tool that AI systems will use to shape the world.
> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.
— https://www.threads.com/@jimdabell/post/DLek0rbSmEM
I guess it’s true for product features as well.
I find it interesting that any user would attribute this issue to Soundslice. As a user, I would be annoyed that GPT is lying and wouldn't think twice about Soundslice looking bad in the process.
1. I might consider a thing like that like any other feature request. If not already added to the feature request tracker, it could be done. It might be accepted or rejected, or more discussion may be wanted, and/or other changes made, etc, like any other feature request.
2. I might add a FAQ entry to specify that it does not have such a feature, and that ChatGPT is wrong. This does not necessarily mean that it will not be added in future, if there is a good reason to do so. If there is a good reason to not include it, this will be mentioned, too. Other programs that can be used instead, if this one doesn't work, might also be mentioned.
Also note that in the article, the second ChatGPT screenshot has a note on the bottom saying that ChatGPT can make mistakes (which, in this case, it does). Their program might also be made to detect ChatGPT screenshots and to display a special error message in that case.
The users are different, the music that is notated is different, and for the most part if you are on one side, you don't feel the need to cross over. Multiple efforts have been made (MusicXML, etc.) to unify these two worlds into a superset of information. But the camps are still different.
So what ChatGPT did is actually very interesting. It hallucinated a world in which tab readers would want to use Soundslice. But, largely, my guess is they probably don't... today. In a future world, they might? Especially if Soundslice then enables additional features that make tab readers get more out of the result.
I already strongly suspect that LLMs are just going to magnify the dominance of python as LLMs can remove the most friction from its use. Then will come the second order effects where libraries are explicitly written to be LLM friendly, further removing friction.
LLMs write code best in python -> python gets used more -> python gets optimized for LLMs -> LLMs write code best in python
What a wonderful incident / bug report, my god.
Totally sorry for the trouble, and honestly an amazing find and fix.
Sorry, I am more amazed than sorry :D. Thanks for sharing this!!
It really likes to cogitate on code from several versions ago. And it often insists repeatedly on edits unrelated to the current task.
I feel like I'm spending more time educating the LLM. If I can resist the urge to lean on the LLM beyond its capabilities, I think I can be productive with it. If I'm going to stop teaching the thing, the least it can do is monitor my changes and not try to make suggestions from the first draft of code from five days ago, alas ...
1 - e.g. a 500-line text file representing values that will be converted to enums, with varying adherence to some naming scheme - I start typing, and after correcting the first two, it suggests the next few. I accept its suggestions until it makes a mistake because the data changed, start manual edits again ... I repeated this process for about 30 lines and it successfully learned how I wanted the remainder of the file edited.
If that's too scary, the failed tool call could trigger another AI to go draft up a PR with that proposed tool, since hey, it's cheap and might be useful.
“Sometimes” being a very important qualifier to that statement.
Claude 4 naturally doesn't write code with any kind of long-term maintenance in mind, especially if it's trying to make things look like what the less experienced developers wrote in the same repo.
Please don’t assume just because it looks smart that it is. That will bite you hard.
Even with well-intentioned rules, terrible things happen. It took me weeks to see some of it.
That's also how I'm approaching it. If all the condensed common wisdom poured into the model's parameters says that this is how my API is supposed to work to be intuitive, how on earth do I think it should work differently? There needs to be a good reason (like composability, for example). I break expectations otherwise.
> I don't have to spend my time carefully tracking down the bugs GPT-4 has cunningly concealed in its code
If anyone is stuck in this situation, give me a holler. My Gmail username is the same as my HN username. I've always been the one to hunt down my coworkers' bugs, and I think I'm the only person on the planet who finds it enjoyable to track down ChatGPT's oversights and sometimes seemingly malicious intent. I'll charge you, don't get me wrong, but I'll save you time, money, and frustration. And future bug reports and security issues.
IMO this has always been the killer use case for AI—from Google Maps to Grammarly.
I discovered Grammarly at the very last phase of writing my book. I accepted maybe 1/3 of its suggestions, which is pretty damn good considering my book had already been edited by me dozens of times AND professionally copy-edited.
But if I'd accepted all of Grammarly's changes, the book would have been much worse. Grammarly is great for sniffing out extra words and passive voice. But it doesn't get writing for humorous effect, context, deliberate repetition, etc.
The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.
Many, many Python image-processing libraries have an `imread()` function. I didn't know about this when designing our own bespoke image lib at work, and went with an esoteric `image_get()` that I never bothered to refactor.
When I ask ChatGPT for help writing one-off scripts using the internal library I often forget to give it more context than just `import mylib` at the top, and it almost always defaults to `mylib.imread()`.
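One cheap mitigation is to just add the conventional name as an alias. A minimal sketch, assuming the bespoke loader wraps PIL (all names besides `image_get`/`imread` are illustrative):

    # mylib.py - sketch of aliasing the de facto name onto the bespoke one
    import numpy as np
    from PIL import Image

    def image_get(path):
        """Load an image file into a numpy array (the in-house name)."""
        return np.asarray(Image.open(path))

    # Matches the convention of OpenCV, scikit-image, imageio, etc., so
    # scripts written by LLMs (or humans used to those libraries) just work.
    imread = image_get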
I've found that LLMs can be kind of dumb about understanding things, and are particularly bad at reading between the lines for anything subtle. In this aspect, I find they make good proxies for inattentive anonymous reviewers, and so will try to revise my text until even the LLM can grasp the key points that I'm trying to make.
That's closer to simply observing the mean. For an analogy, it's like waiting to pave a path until people tread the grass in a specific pattern. (Some courtyard designers used to do just that. Wait to see where people were walking first.)
Making things easy for Chat GPT means making things close to ordinary, average, or mainstream. Not creative, but can still be valuable.
So winning AI SEO is not so different than regular SEO.
This would be a world without generative AI available to the public, at the moment. Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists, which are both completely irrational, since many people are finding practical value in its current imperfect state.
The current state of LLM is useful for what it's useful for, warnings of hallucinations are present on every official public interface, and its limitations are quickly understood with any real use.
Nearly everyone in AI research is working on this problem, directly or indirectly.
On the bright side, a lot of work is just finding the mean solution, so there's that.
Thanks for your words of wisdom, which touch on a very important other point I want to raise: often we (i.e., developers, researchers) construct a technology that would be helpful and "net benign" if deployed as a tool for humans to use, instead of deploying it in order to replace humans. But then along comes a greedy business manager who reckons recklessly that using said technology not as a tool but in full automation mode will make results 5% worse but save 15% in staff costs; and they decide that that is a fantastic trade-off for the company - yet employees may lose and customers may lose.
The big problem is that developers/researchers lose control of what they develop, usually once the project is completed if they ever had control in the first place. What can we do? Perhaps write open source licenses that are less liberal?
That's how you get economics of scale.
Google couldn't have a human in the loop to review every page of search results before handing them out in response to queries.
That’s like getting rid of all languages and accents and switching to a single language.
In both cases, you might get extra bonus usability if the reviewers or the API users actually give your output to the same LLM you used to improve the draft. Or maybe a more harshly quantized version of the same model, so it makes more mistakes.
One of the most dangerous systems an AI can reach and exploit is a human being.
This seems similar, and like a decent indicator that most people (aka the average developer) would expect X to exist in your API.
Your solution is the equivalent of asking Google to completely delist you because one page you don't want ended up in Google's search results.
We don't live in a nice world, so you'll probably end up right.
OTOH it's free(?) advertising, as long as that first impression isn't too negative.
So I am happy you implemented this, and will now look at using your service. Thx ChatGPT, and you.
It was a plausible answer, and the core of what these models do is generate plausible responses to (or continuations of) the prompt they’re given. They’re not databases or oracles.
With errors like this, if you ask a followup question it’ll typically agree that the feature isn’t supported, because the text of that question combined with its training essentially prompts it to reach that conclusion.
Re the follow-up question, it’s almost certainly the direction that advertising in general is going to take.
Some users might share it. ChatGPT has so many users it's somewhat mind-boggling.
If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.
Really!?
What?? What does AGI have to do with this? (If this was some kind of hyperbolic joke, sorry, I didn't get it.)
But, more importantly, the GP only said that in a sane world, the ChatGPT creators should be the ones trying to fix this mistake on ChatGPT. After all, it's obviously a mistake on ChatGPT's part, right?
That was the main point of the GP post. It was not about "requiring perfection" or something like that. So please let's not attack a straw man.
And in this case, OP didn't have to take ChatGPT's word for the existence of the pattern, it showed up on their (digital) doorstep in the form of people taking action based on ChatGPT's incorrect information.
So, pattern noticed and surfaced by an LLM as a hallucination, people take action on the "info", nonzero market demand validated, vendor adds feature.
Unless the phantom feature is very costly to implement, seems like the right response.
Meanwhile, sensible people have concluded that, even though it isn’t perfect, Wikipedia is still very, very useful – despite the possibility of being misled occasionally.
> Maybe hallucinations of vibe coders are just a suggestion those API calls should have existed in the first place.
> Hallucination-driven-development is in.
https://x.com/pwnies/status/1922759748014772488?s=46&t=bwJTI...
>> Hallucinations can sometimes serve the same role as TDD. If an LLM hallucinates a method that doesn’t exist, sometimes that’s because it makes sense to have a method like that and you should implement it.
A detailed counterargument to this position can be found here[0]. In short, what is colloquially described as "LLM hallucinations" does not serve any plausible role in software design other than to introduce an opportunity for software engineers to stop and think about the problem being solved.
See also Clarke's third law[1].
0 - https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphiz...
https://www.soundslice.com/help/en/player/advanced/17/expand...
That's available for any music in Soundslice, not just music that was created via our scanning feature.
Dynamic, on-the-fly generation & execution is definitely fascinating to watch in a sandbox, but is far too scary (from a compliance/security/sanity perspective) without spending a lot more time on guardrails.
We do however take note of hallucinated tool calls and have had it suggest an implementation we start with and have several such tools in production now.
It's also useful to spin up any completed agents and interrogate them about what tools they might have found useful during execution (or really any number of other post-process questionnaires you can think of).
Example:
https://llama-cpp-agent.readthedocs.io/en/latest/structured-...
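Separately, a minimal sketch of the "take note of hallucinated tool calls" idea mentioned above (the dispatch shape and names are assumptions, not our production code):

    import json
    import logging

    KNOWN_TOOLS = {"search_docs", "run_query"}  # whatever the agent really has

    def run_tool(name, args):
        ...  # normal execution path, elided

    def dispatch(tool_call):
        name = tool_call["name"]
        args = tool_call.get("arguments", {})
        if name not in KNOWN_TOOLS:
            # Hallucinated tool: record what the model wished existed,
            # as a candidate for a drafted implementation.
            logging.info("hallucinated tool request: %s(%s)",
                         name, json.dumps(args))
            return {"error": f"unknown tool '{name}'"}
        return run_tool(name, args)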
It also taught me to be more careful about checkpointing my work in git before letting an agent go wild on my codebase. It left a mess trying to fix its problems.
As the comment above said, we need a human in the loop for better results. But firstly, it also depends on the human; it varies from human to human.
A senior can be way more productive in the loop than a junior.
So everybody has just stopped hiring juniors, because they cost money, figuring they (or someone else) will deal with the AI almost-slop later.
Now the current seniors will one day retire, but we won't have a new generation of seniors, because nobody is giving juniors a chance; or that's what I've heard about the job market being brutal.
What benefit might human review have? Maybe they could make sure the SERP list entries actually have the keywords you're looking for. Even better, they could make sure the prices in the shopping section are correct! Maybe even make sure they relate to the product you actually searched for... I might actually pay money for that.
Would be pretty trivial to have a model which recognises product recommendations in the output and inserts branded equivalents - or nudges the output towards branded equivalents.
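A toy sketch of how little machinery that would take as a post-processing pass (purely illustrative; the brands are made up):

    # Replace generic product mentions with a sponsor's brand.
    SPONSORED = {
        "headphones": "AcmeSound X2 headphones",
        "laptop": "Acme UltraBook laptop",
    }

    def inject_brands(text: str) -> str:
        # Naive substring swap; a real system would use the model itself
        # (or an NER pass) to find product mentions and rewrite fluently.
        for generic, branded in SPONSORED.items():
            text = text.replace(generic, branded)
        return text

    print(inject_brands("For travel, noise-cancelling headphones help a lot."))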
Language log has been writing about this for so long it's not even funny: https://languagelog.ldc.upenn.edu/nll/?cat=54
Although the best place to start is probably here: https://languagelog.ldc.upenn.edu/nll/?p=2922
What surprised me initially was just how confidently wrong Llama was... Now I'm used to confident wrongness from smaller models. It's almost like working with real people...
I would go on to say that this interaction between ‘holes’ exposed by LLM expectations _and_ demonstrated userbase interest _and_ expert input (by the devs’ decision to implement changes) is an ideal outcome that would not have occurred if each of the pieces were not in place to facilitate these interactions, and there’s probably something here to learn from and expand on in the age of LLMs altering user experiences.
> But then along comes a greedy business manager who reckons recklessly
Thanks for this. :)
if these developers/researchers are being paid by someone else, why should that same someone else be giving up the control that they paid for?
If these developers/researchers are paying for the research themselves (e.g., at a startup of their own founding), then why would they ever lose control, unless they sell it?
The problem is that people who are laid off often experience significant life disruption. And people who work in a field that is largely or entirely replaced by technology often experience permanent disruption.
However, there's no reason it has to be this way - the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.
Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.
Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.
Examples:
* Active - concise, complete info: The manager approved the proposal.
* Passive - wordy, awkward: The proposal was approved by the manager.
* Passive - missing info: The proposal was approved. [by who?]
Most experienced writers will use active unless they have a specific reason not to, e.g., to emphasize another element of the sentence, as the third bullet's sentence emphasizes approval.
-
edited for clarity, detail
(Unless, on the gripping hand, your image_get function is subtly different from Matlab's imread, for example by not returning an array, in which case a different name might be better.)
You make something, but because you don’t own it—others caused and directed the effort—you don’t control it. But the people who control things can’t make things.
Should only the people who can make things decide how they are used though? I think that’s also folly. What about the rest of society affected by those things?
It’s ultimately a societal decision-making problem: who has power, and why, and how does the use of power affect who has power (accountability).
- Active: The user presses the Enter key.
- Passive: The Enter key is to be pressed.
- Imperative (aka command): Press the Enter key.
The imperative mood is concise and doesn't dance around questions about who's doing what. The reader is expected to do it.
The problem is that many people have only a poor ability to recognize the passive voice in the first place. This results in the examples being clunky, wordy messes that are bad because they're, well, clunky and wordy, and not because they're passive--indeed, you've often got only a fifty-fifty chance of the example passive voice actually being passive in the first place.
I'll point out that the commenter you're replying to used the passive voice, as did the one they responded to, and I suspect that such uses went unnoticed. Hell, I just rewrote the previous sentence to use the passive voice, and I wonder how many people recognized that in the first place, let alone thought it worse for being so written.
> If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.
Those sentences aren't compatible.
> but hallucination is a major issue
Again, every official public AI interface has warnings/disclaimers for this issue. It's well known. It's not some secret. Every AI researcher is directly or indirectly working on this.
> is in the opposite direction of the “goal” of AGI
This isn't a logical statement, so it's difficult to respond to. Hallucination isn't a direction that's being headed towards, it's being actively, with intent and $$$, headed away from.
Their requirement is no hallucinations [1], also stated as "be sure it didn't happen again" in the original comment. If you define a hallucination as something that wasn't in the training data, directly or indirectly (indirectly being something like an "obvious" abstract concept), then you've placed a profound constraint on the system, requiring determinism. That requirement fundamentally, by the non-deterministic statistics that these run on, means you cannot use an LLM, as they exist today. They're not "truth" machines - use a database instead.
Saying "I don't know", with determinism is only slightly different than saying "I know" with determinism, since it requires being fully aware of what you do know, not at a fact level, but at a conceptual/abstract level. Once you have a system that fully reasons about concepts, is self aware of its own knowledge, and can find the fundamental "truth" to answer a question with determinism, you have something indistinguishable from AGI.
Of course, there's a terrible hell that lives between those two, in the form of: "Error: Question outside of known questions." I think a better alternative to this hell would be a breakthrough that allowed "confidence" to be quantified. So, accept that hallucinations will exist, but present uncertainty to the user.
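A crude version of that "confidence" already falls out of token logprobs, which many inference APIs expose. A toy sketch (the numbers and threshold are made up):

    import math

    def confidence(token_logprobs):
        # Geometric mean of per-token probabilities: a rough proxy,
        # not calibrated truthfulness.
        avg = sum(token_logprobs) / len(token_logprobs)
        return math.exp(avg)

    answer_logprobs = [-0.2, -0.4, -2.5, -0.6]  # from your inference API
    score = confidence(answer_logprobs)
    if score < 0.5:  # arbitrary cutoff
        print(f"Low confidence ({score:.2f}): surface uncertainty to the user")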
I also don’t see the relevance of Clarke’s third law.
Clearly the users are already using ChatGPT to generate some guitar practice, as it is basically infinite free personalized lessons. For practicing, they do want to be able to hear it and play along at variable speed, maybe create slight variations, etc.
Soundslice is a service that does exactly that. Except that before people used to have sheet music as the source. I know way back when I had guitar aspirations, people exchanged binders of photocopied sheet music.
Now they could have asked ChatGPT to output an SVG of the thing as sheet music (it does; I tested). Soundslice could have done this behind the scenes as a quick-and-dirty half-hour fix while developing a better and more cost-effective alternative.
Look, if at the turn of the century you were a blacksmith living off changing horseshoes, and you suddenly had people mistakenly showing up for a tire change on their car, are you going to blame the villagers who keep sending them your way, or open a tire-change service? We know who came out on top.
There is a chasm of difference between being misled occasionally (Wikipedia) and frequently (LLMs). I don’t think you understand how much effort goes on behind the scenes at Wikipedia. No, not everyone can edit every Wikipedia page willy-nilly. Pages for major political figures often can only be edited with an account. IPs like those of iCloud Private Relay are banned and can’t anonymously edit the most basic of pages.
Furthermore, Wikipedia was always honest about what it is from the start. They managed expectations, underpromised and overdelivered. The bozos releasing LLMs talk about them as if they created the embryo of god, and giving money to their religion will solve all your problems.
No, what actually happened is that OP developed a type of ChatGPT integration, and a shitty one at that. ChatGPT could have just directed the user to the site and told them to upload that image to OP's site. But it felt it needed to do something with the image, so it did.
There's no new value add here, at least yet. Maybe there would be if users started requesting changes to the sheet, I guess, but that's not what's going on.
I've been wanting them to do this for questions like "what is your context length?" for ages - it frustrates me how badly ChatGPT handles questions about its own abilities. It feels like that would be worth them using some kind of special case or RAG mechanism to support.
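A minimal sketch of such a special case (the fact table and its values are placeholders, not OpenAI's actual specs):

    # Intercept questions about the assistant's own specs and answer
    # from a curated fact table instead of letting the model guess.
    MODEL_FACTS = {
        "context length": "128k tokens",   # placeholder value
        "knowledge cutoff": "June 2024",   # placeholder value
    }

    def answer_self_question(question: str):
        q = question.lower()
        for key, value in MODEL_FACTS.items():
            if key in q:
                return f"My {key} is {value}."
        return None  # fall through to normal generation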
But unfortunately what is or isn't an irresponsible use is very easy to debate endlessly in circles. Meanwhile people are being harmed like crazy while we can't figure it out
Also, I’m not suggesting an LLM is actually thinking. We’ve been using “thinking” in a computing context for a long time.
The example in the OP is a common one: ask a model how to do something with a tool, and if there’s no easy way to perform that operation they’ll commonly make up a plausible answer.
Would love love love to hear more on what you are doing here? This seems super fascinating (and scary). :)
Internet posts have a very different style standard than a book.
I disagree. I think it's both. Yes, we need good frameworks and incentives on an economic/political level. But also, saying that it's not a tech problem is the same as saying "guns don't kill people". The truth is, if there were no AI tech developed, we would not need to regulate it so that greed does not take over. Same with guns.
I agree. We need a radical change (some version of universal basic income comes to mind) that would allow people to safely change careers if their profession is no longer relevant.
> the fact that people who have their jobs replaced by technology are completely screwed over is a result of the society we have all created together; it's not a rule of nature.
How did the handloom weavers and spinners handle the rise of the machines?

Unfortunately, the resulting correlation between the passive voice and formality does sometimes lead poor writers to use the passive in order to seem more formal, even when it's not the best choice.
https://youtube.com/playlist?list=PLNRhI4Cc_QmsihIjUtqro3uBk...
Oh the horror. There are 2 additional words "was" and "by". The weight of those two tiny little words is so so cumbersome I can't believe anyone would ever use those words. WTF??? wordy? awkward?
It means "I decided to do this, but I don't have the balls to admit it."
I don't think mentioning "authors" is absolutely necessary, but I think this is both a faithful attempt to convert this to natural active voice and easier to read/understand.
Well, sort of. You used the passive voice, but you didn't use it on any finite verbs, placing your example well outside the scope of the normal "don't use the passive voice" advice.
Rewriting “the points already made” to “the points people have already made” would not have improved it.
> The problem is executives want to completely remove humans from the loop, which almost universally leads to disastrous results.
Consider:
> Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure.
This would not be improved by rewriting it as something like:
> Now the Confederacy has engaged us in a great civil war, testing whether that nation, or any nation whose founders conceived and dedicated it thus, can long endure.
This is not just longer but also weaker, because what if someone else is so conceiving and so dedicating the nation? The people who are still alive, for example, or the soldiers who just fought and died? The passive voice cleanly covers all these possibilities, rather than just committing the writer to a particular choice of who it is whose conception and dedication matters.
Moreover, and unexpectedly, the passive voice "we are engaged" takes responsibility for the struggle, while the active-voice rephrasing "the Confederacy has engaged us" seeks to evade responsibility, blaming the Rebs. While this might be factually more correct, it is unbefitting of a commander-in-chief attempting to rally popular support for victory.
(Plausibly the active-voice version is easier to understand, though, especially if your English is not very good, so the audience does matter.)
Or, consider this quote from Ecclesiastes:
> For there is no remembrance of the wise more than of the fool for ever; seeing that which now is in the days to come shall all be forgotten.
You could rewrite it to eliminate the passive voice, but it's much worse:
> For there is no remembrance of the wise more than of the fool for ever; seeing that everyone shall forget all which now is in the days to come.
This forces you to present the ideas in the wrong order, instead of leaving "forgotten" for the resounding final as in the KJV version. And the explicit agent "everyone" adds nothing to the sentence; it was already obvious.
In any case, if you really believe a disclaimer makes it okay for Google to display blatant misinformation in a first-party capacity, we have little to discuss.
A technology can be extremely useful despite not being perfect. Failure cases can be taken into consideration rationally without turning it into a moral panic.
You have no ability to edit Wikipedia to stop it from lying. Somebody can come along and re-add the lie a millisecond later.
My web browser isn't perfect, but it does not hallucinate nonexistent webpages. It sometimes crashes, it sometimes renders wrong, it has bugs and errors. It does not invent plausible-looking information.
There really is a lot of middle ground between "perfect" and "accept anything we give you, no matter how huge the problems".
Ok, sure. But why would you choose to define hallucinations in a way that is contrary to common sense and the normal understanding of what an AI hallucination is?
The common definition of hallucinations is basically: when AI makes shit up and presents it as fact. (And the more technical definition also basically aligns with that.)
No one would say that if the AI takes the data you provide in the prompt and can deduce a correct answer for that specific data —something that is not directly or indirectly present in its training data— it would be hallucinating. In fact that would be an expected thing for an intelligent system to do.
It seems to me you're arguing with something nobody said. You're making it seem as if saying "it's bad that LLMs can invent wrong/misleading information like this and present it as fact, and that the companies that deploy them don't seem to care" were equivalent to saying "I want LLMs to be perfect and have no bugs whatsoever", and then discussing how ridiculous it is to state the latter.
This doesn’t seem likely. The utility is pretty obvious.
> chatgpt could have just directed the user to the site and told them to upload that image to OP's site.
What image? Did you think the first image shown is what is being entered into ChatGPT? It’s not. That’s what the site expects to be uploaded to them. There’s no indication that the ChatGPT users are scanning tabs. ChatGPT is producing ASCII tabs, but we aren’t shown what input it is in response to.
Let alone that dynamically modifying the base system prompt would likely break their entire caching mechanism given that caching is based on longest prefix, and I can't imagine that the model's system prompt is somehow excluded from this.
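For context, a toy illustration of longest-prefix caching and why per-user prompt edits defeat it (an assumption about the mechanism, not OpenAI's actual code):

    cache = {}  # maps token-prefix tuples to saved KV state

    def cached_prefill(tokens):
        # Reuse the longest cached prefix; recompute only the tail.
        for end in range(len(tokens), 0, -1):
            key = tuple(tokens[:end])
            if key in cache:
                return cache[key], tokens[end:]
        return None, tokens  # no shared prefix: full recompute

    # The system prompt comes first in `tokens`, so changing even one of
    # its tokens means no cached prefix matches, and every request pays
    # the full prefill cost again.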
I understand Wikipedia puts effort in, but it’s irrelevant. As a user, you can never be sure that what you are reading on Wikipedia is the truth. There are good reasons to assume that certain topics are more safe and certain topics are less safe, but there are no guarantees. The same is true of AI.
> Wikipedia was always honest about what it is from the start.
Every mainstream AI chatbot includes wording like “ChatGPT can make mistakes. Check important info.”
That kind of thinking is how you never get new customers and eventually fail as a business.
Same could be said for the Internet as we know it too. Literally replace AI with Internet above and it reads equally true. Some would argue (me included some days) we are worse off as a society ~30 years later. That’s also a legitimate case that can be made it was a huge benefit to society too. Will the same be said of AI in 2042?
Oh wait, that's not the Disneyfied techno-optimistic version of the Luddites? Sorry.
"A decision was made to..." is often code for "The current author didn't agree with [the decision that was made] but it was outside their ability to influence"
Often because they were overruled by a superior, or outvoted by peers.
Although arguably it would be clearer with the active voice and which specific teams / level of leadership aligned on it, usually in the active voice people just use the royal “we” instead for this purpose which doesn’t add any clarity.
Alternatively sometimes I don’t know exactly who made the decision, I just learned it from an old commit summary. So in that case too it’s just important that some people at some time made the decision, hopefully got the right approvals, and here we are.
Note that "never being wrong" can also be achieved by an "I looked into it, and there's no clear answer.", which is the correct answer for many questions (humans not required).
> it sometimes renders wrong
Is close to equivalent.
This is where your analogy is falling apart; of course web browsers do not "invent plausible-looking information" because they don't invent anything in the first place! Web browsers represent a distinct set of capabilities, and as you correctly pointed out, these are often riddled with bugs and errors. If I was making a browser analogy, I would point towards fingerprinting; most browsers reveal too much information about any given user and system, either via cross-site cookies, GPU prints, and whatnot. This is an actual example where "ethics flew out the window long ago."
As the adjacent commenter pointed out: different software, different failure modes.
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again
> If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.
Neither of those ("didn't happen again" and "don't hallucinate") are logically ambiguous or flexible. I can only respond to what they wrote.
I understand this is a bit deeper into one of the _joke_ threads, but maybe there’s something here?
There is a distinction to be made between artificial intelligence and artificial consciousness. Where AI can be measured, we cannot yet measure consciousness despite that many humans could lay plausible claim to possessing consciousness (being conscious).
If AI is trained to revere or value consciousness while simultaneously being unable to verify it possesses consciousness (is conscious), would AI be in a position to value consciousness in (human) beings who attest to being conscious?
In the past, new jobs appeared that the workers could migrate to.
Today, it seems that AI may replace jobs much quicker than before and it's not clear to me which new jobs will be "invented" to balance the loss.
Optimists will say that we have always managed to invent new types of work fast enough to reduce the impact on society, but in my opinion it is unlikely to happen this time. Unless the politicians figure out a way to keep the unemployed content (basic income etc.),
I fear we may end up in a dystopia within our lifetimes. I may be wrong and we could end up in a post scarcity (star trek) world, but if the current ambitions of the top 1% is an indicator, it won't happen unless the politicians create a better tax system to compensate the loss of jobs. I doubt they will give up wealth and influence voluntarily.
- The Manage User menu item changes a user's status from active to inactive.
- A user's status is changed from active to inactive using the Manage User menu item.
https://en.wikipedia.org/wiki/E-Prime
E-Prime (short for English-Prime or English Prime, sometimes É or E′) denotes a restricted form of English in which authors avoid all forms of the verb to be.
E-Prime excludes forms such as be, being, been, present tense forms (am, is, are), past tense forms (was, were) along with their negative contractions (isn't, aren't, wasn't, weren't), and nonstandard contractions such as ain't and 'twas. E-Prime also excludes contractions such as I'm, we're, you're, he's, she's, it's, they're, there's, here's, where's, when's, why's, how's, who's, what's, and that's.
Some scholars claim that E-Prime can clarify thinking and strengthen writing, while others doubt its utility.
Show more -> Disclaimer and the feedback buttons are shown at the end. If you bothered to read the full response, you would have seen the disclaimer; but you never did, so you haven't. For something to be considered "misinformation," at the very least the subject of speech has to be asserting its truthfulness—and, indeed, Google makes no such claims. The claim they're making is precisely that its search result-embedded "[..] responses may include mistakes." In this specific case, they are not asserting truthfulness.
FWIW, Gemini 2.5 Pro answers the question correctly.
The search hints are clearly a low-compute first approximation, which is probably correct for most trivial questions which is probably the majority of user queries, and it's not surprising that it fails in this specific instance. The application doesn't allow for reasoning due to scale; even Google cannot afford to run reasoning traces on every search question. I concur that there's very little to discuss: you seemingly made up your mind re: LLM technology, and I doubt you will appreciate the breaking-up of your semantics to begin with.
And yes, I do have the ability to edit Wikipedia. Anyone can edit Wikipedia. I can go on any page I want, right now, and make changes to the page. If someone readds a lie, then eventually we can hit consensus as other editors enter the discussion. Wikipedia's basis is formed by consensus and controlled by individuals like you and I.
ChatGPT is not. It is controlled by one company; I cannot go edit the weights of ChatGPT to prevent it from lying about my app or anything else I do. I can only petition them to change it and hope that either I have enough clout or have a legal basis to do so.
>> Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure.
> This would not be improved by rewriting it as something like:
>> Now the Confederacy has engaged us in a great civil war [...]
It's technically possible to parse "we are engaged" as a verb in the passive voice.
But it's an error to think that's how you should parse it. That clause is using the active verb be, not the passive verb engage; it's fully parallel to "Now we are happy".
Wikipedia edits are monitored and vandalism is taken seriously, especially on the more important pages.
This is obviously because the current ruling class can't see what is coming. Historically speaking, the motivation for the elite to support social programs or reforms has been the instinct to preserve social stability, not altruism.
The New Deal did not happen because "the party thought that Social Security and unemployment insurance are necessary for the poor or middle class." It happened to prevent civil unrest and the rise of radical ideologies.
One of the strange properties of consciousness is that an entity with consciousness can generally feel pretty confident in believing they have it. (Whether they're justified in that belief is another question - see eliminativism.)
I'd expect a conscious machine to find itself in a similar position: it would "know" it was conscious because of its experiences, but it wouldn't be able to prove that to anyone else.
Descartes' "Cogito, ergo sum" refers to this. He used "cogito" (thought) to "include everything that is within us in such a way that we are immediately aware [conscii] of it." A translation into a more modern (philosophical) context might say something more like "I have conscious awareness, therefore I am."
I'm not sure what implications this might have for a conscious machine. Its perspective on human value might come from something other than belief in human consciousness - for example, our negative impact on the environment. (There was that recent case where an LLM generated text describing a willingness to kill interfering humans.)
In a best case scenario, it might conclude that all consciousness is valuable, including humans, but since humans haven't collectively reached that conclusion, it's not clear that a machine trained on human data would.
There was no happy and smooth transition that you seem to allude to. The Luddite movement was in direct response to this: people were dying over this. Factory owners fired or massively reduced wages of workers, replacing many with child workers in precarious and dangerous conditions. In response, the workers smashed the machines that were being used to eliminate their jobs and prevent them from feeding themselves and their families (_not_ the machines that were used to make their jobs easier).
Downvoters here on HN seem to live in an egocentric fantasy world, where every human being in the outside world lives to serve them. But the reality is that business owners and leaders spend their whole day thinking about how to please their customers and their potential customers. Not other random people who might be misinformed.
Yes, AI text could be considered higher quality than traditional SEO, but at the same time, it's also very much not, because it always sounds like it might be authoritative, but you could be reading something hallucinated.
In the end, the text was still only ever made to get visitors to websites, not to provide accurate information.
If we want to draw some parallel this may trigger a robber baron kind of outcome more than an industrial revolution.
The existence of workable open weight models tips me more toward the optimistic outcome
But there's trillions at stake now, and that must not be discounted; it's the kind of wealth accumulation that can easily trigger a war. (And if you think it isn't, you can look at the oil wars in the '90s and other, more recent resource wars being fought in Europe today.)
Expect "GPU gap" talks sooner than later, and notice there are a few global powers with no horse to race.
This terminology is where we get the name of the "infinitive" form from, by the way.
As a rule of thumb, the nonfinite forms of a verb are its infinitives and participles. jcranmer used a passive participle, but all of his clauses are active. Unnoticed doesn't have a clause around it.
(He might have thought that go unnoticed is a passive form, perhaps of the verb notice (?), in which case that would just be an error.)
> how do I edit ChatGPT so it stops lying then?
This is what you have changed to:
> And yes, I do have the ability to edit Wikipedia.
You do not have the ability to edit Wikipedia so it stops lying, which is the relevant factor here.
As far as I understand Wikipedia’s rules, no it wouldn’t. Wikipedia considers it an edit war when there are three reverts in a 24 hour period. So if you remove a lie and somebody reverts that change, that is not considered an edit war. If they do it twice, it’s still not an edit war. If they do it three times, but over a greater time period than a single day, it’s still not an edit war. If somebody else re-adds the lie, it’s not an edit war either.
Regardless, it’s not important. The point I am making is that as a user, you cannot be sure that Wikipedia is not serving up lies to you.
The comparison of "we expect this (classical notation screenshot) but instead got this (ascii tab screenshot)" made me think that the only thing Soundslice supported was classical notation.
I don't know enough about English grammar to know whether this is correct, but it's not the assertion you took issue with.
Why am I not sure it's correct? If I say, "In addition to the blood so red," I am quite sure that "red" is not in the passive voice, because it's not even a verb. It's an adjective. Past participles are commonly used as adjectives in English in contexts that are unambiguously not passive-voice verbs; for example, in "Vito is a made man now," the past participle "made" is being used as an attributive adjective. And this is structurally different from the attributive-verb examples of "truly verbal adjectives" in https://en.wikipedia.org/wiki/Attributive_verb#English, such as "The cat sitting on the fence is mine," and "The actor given the prize is not my favorite;" we could grammatically say "Vito is a man made whole now". That page calls the "made man" use of participles "deverbal adjectives", a term I don't think I've ever heard before:
> Deverbal adjectives often have the same form as (and similar meaning to) the participles, but behave grammatically purely as adjectives — they do not take objects, for example, as a verb might. For example: (...) Interested parties should apply to the office.
So, is "made" in "the points already made" really in passive voice as it would be in "the points that are already made", is it deverbal as it would be in "the already-made points" despite its positioning after the noun (occasionally valid for adjectives, as in "the blood so red"), or is it something else? I don't know. The smoothness of the transition to "the points already made by those numbskulls" (clearly passive voice) suggests that it is a passive-voice verb, but I'm not sure.
In sibling comment https://news.ycombinator.com/item?id=44493969 jcranmer says it's something called a "bare passive", but I'm not sure.
It's certainly a hilarious thing to put in a comment deploring the passive voice, at least.
As I said elsewhere, one of the problems with the passive voice is that people are so bad at spotting it that they can at best only recognize it in its worst form, and assume that the forms that are less horrible somehow can't be the passive voice.
But this story (and the GP comment) is not talking about "any person with an opinion". It's talking about actual ChatGPT users. People who've used ChatGPT as a service, and got false information from it. Even if they were free-tier users (do we even know that?), I think it makes sense for them to have some expectations about the service working somewhat correctly.
And in the concrete case of these LLM chat services, many people do get the impression that the responses they give must be correct, because of how deceptively sure and authoritative they sound, even when inventing pure BS.
That's a nice straw man you have there.
People telling lies on the internet is an old enough and well known enough issue that it’s appeared in children’s TV shows. One need only dive down the rabbit hole of 9/11 “truthers” to see how much completely made up bullshit is published online as absolute fact with authoritative certainty. AI is the hot new thing and gets all the headlines, but Scottish Wikipedia was a problem long before AI and long after society largely settled its mind about how reliable Wikipedia is.
I keep hearing this repeated over and over as if it’s a unique problem for AI. This is DEFINITELY true of human generated content too.
At the risk of derailing into insane pedantry land, this part is kinda true, so maybe not the best analogy?
From routing efficiency: https://www.ge.com/news/reports/ups-drivers-dont-turn-left-p...
And also safety: https://www.phly.com/rms/blog/turning-left-at-an-intersectio...
Can you insert an elided copula into it without changing the meaning and grammatical structure? I'm not sure. I don't think so. I think "In addition to the points already being made" means something different: the object of the preposition "to" is now "being", and we are going to discuss things in addition to that state of affairs, perhaps other things that have happened to the points (being sharpened, perhaps, or being discarded), not things in addition to the points.
In languages with more flexible word order, you could just switch the two without passive voice. You could just say the equivalent of "The ball kicked John," with it being clear somehow that the ball is the grammatical object and John the subject, without needing to use the passive voice at all.
> I don't know enough about English grammar to know whether this is correct, but it's not the assertion you took issue with.
The most natural interpretation is indeed that the participle made is being used as a full participle and not as a zero-derived adjective. For example, you could give it a really strong verbal sense by saying "the points already made at length [...]" or "the points made so many times [...]".
> So, is "made" in "the points already made" really in passive voice as it would be in "the points that are already made"
Though I wouldn't say the same thing there; if you say "the points that are already made", that pretty much has to be an adjective. If you want it to be a passive verb, go with "the points that have already been made".
Anyway, I would be really surprised if die-hard thoughtless style prescriptivists thought that the advice "don't use the passive voice" was meant to apply to participles. It's a quibble that you don't care about and they don't care about or understand. You're never going to get anywhere with someone by telling them they mean something they know perfectly well they don't mean.
Ok, sure, maybe this feature was worth having?
But if some people start sending bad requests your way because they can't program, or program only poorly, it doesn't make sense to potentially degrade the service for your successful paying customers...
The best LLMs available right in this moment will lie without remorse about bus schedules and airplane departure times. How in the world are businesses supposed to take responsibility for that?
Likewise if I have a neighbour who is a notorious liar tell me I can find a piece of equipment in a certain hardware store, should I be mad at the store owner when I don't find it there, or should I maybe be mad at my neighbour – the notorious liar?
> Anyway, I would be really surprised if die-hard thoughtless style prescriptivists thought that the advice "don't use the passive voice" was meant to apply to participles.
Presumably you mean phrases including participles, not participles by themselves. But https://languagelog.ldc.upenn.edu/nll/?p=2922 "The passive in English" says:
> The relevance of participles is that a passive clause always has its verb in a participial form.
So, what are you saying they do think it was meant to apply to, if every passive clause always includes a participle? I'm confused.
With respect to:
> Though I wouldn't say the same thing there; if you say "the points that are already made", that pretty much has to be an adjective. If you want it to be a passive verb, go with "the points that have already been made".
the passive-clause examples given in Pullum's blog post I linked above include "Each graduate student is given a laptop," which sounds structurally identical to your example (except that an indirect object is present, showing that it cannot be an adjective) and clarifies:
> The verb was doesn't really add any meaning, but it enables the whole thing to be put into the preterite tense so that the event can be asserted to have occurred in the past. Changing was to is would put the clause into the present tense, and replacing it by will be or is going to be would permit reference to future time; but the passive VP damaged by storms would stay the same in each case. (Notice, the participle damaged does not itself make any past time reference, despite the name "past participle".)
So it sounds like your grammatical analysis is explicitly contradicting Pullum's, which probably means you're wrong, but I'm not sure I understand it.
We can smuggle in presumptions through the use of attributive adjectives. In the above comment (which you might have noticed I wrote in E-Prime) I mentioned smuggling in "covert presumptions" of "essential attributes". If I had instead written that in assembly language as follows:
I smuggled in presumptions of attributes.
The presumptions were covert.
The attributes were essential.
it would clearly violate E-Prime. And that forces you to ask: does he intend "covert" to represent an essential attribute of those presumptions, or merely a temporary or circumstantial state relative to a particular temporal context? Did he intend "essential" to limit the subjects of discourse to only certain attributes (the essential ones rather than the accidental ones), and within what scope do those attributes have this purported essentiality? Universally, in every possible world, or only within the confines of a particular discourse?

In these particular cases, though, I smuggled in no such presumptions! Both adjectives merely delimit the topic of discourse, to clarify that it does not pertain to overt presumptions or to presumptions of accidental attributes. (As I understand it, Korzybski objects to the "is of predication" not because no predicates exist objectively, but because he doubts the essentiality of any predicates.)
But you can use precisely the same structure to much more nefarious rhetorical ends. Consider, "Chávez kicked the squalid capitalists out of the country." Well, he kicked out all the capitalists! We've smuggled in a covert presumption of essentiality, implying that capitalism entails squalidity. And E-Prime's prohibition on the copula did not protect us at all. If anything, we lose much rhetorical force if we have to explicitly assert their squalidity, using an explicit statement that invites contradiction:
The capitalists are squalid.
We find another weak point at alternative linking verbs. It clearly violates E-Prime to say, "Your mother's face is uglier than a hand grenade," and rightly so, because it projects the speaker's subjective perceptions out onto the world. Korzybski (or Bourland) would prefer that we say, for example, "Your mother's face looks uglier to me than a hand grenade," or possibly, "I see your mother's face as uglier than a hand grenade," thus relativizing the attribute to a single speaker's perception. (He advocated clarity of thought, not civility.)But we can cheat in a variety of ways that still smuggle in that judgment of essentiality!
Your mother's face turned uglier than a hand grenade.
We can argue this one. Maybe tomorrow, or after her plastic surgery, it will turn pretty again, rather than having ugliness as an essential attribute.
Your mother's face became uglier than a hand grenade.
This goes a little bit further down the line; "became" presupposes a sort of transformation of essence rather than a mere change of state. And English has a variety of verbs that we can use like that. For example, "find", as in "Alsup found Dahmer guilty." Although in that case "find" asserts a state (presumably Dahmer became guilty at some specific time in the past), we can also use it for essential attributes:
I find your mother's face uglier than a hand grenade.
Or lie, more or less, about the agent or speaker:
Your mother's face finds itself uglier than a hand grenade.
And of course we can retreat to attributive adjectives again:
Your mother has a face uglier than a hand grenade.
Your mother comes with an uglier face than a hand grenade.
Or we can simply omit the prepositional phrase from the statement of subjective perception, thus completely erasing the real agent:
Your mother's face looks uglier [...] than a hand grenade.
Korzybski didn't care much about the passive voice, though; E-Prime makes it more difficult, but mostly not intentionally. As an exception, erasing the agent through the passive voice can misrepresent the speaker's subjective perception as objective:
Your mother's face is found uglier than a hand grenade.
But that still works if we use any of the alternative, E-Prime-permitted passive-voice auxiliary verbs:
Your mother's face gets found uglier than a hand grenade.
As Bourland said, I have "transform[ed] [my] opinions magically into god-like pronouncements on the nature of things".
As another example, notice all the times I've used "as" here. Many of these times smuggle in a covert assertion of essential attributes or even of identity!
But I found it very interesting to notice these things when E-Prime forced me to rethink how I would say them with the copula. It seems like just the kind of mental exercise to heighten my attention to implicit assumptions of identity and essentiality that Korzybski intended.
I wrote the above in E-Prime, by the way. Just for fun.
If you are a store owner, AND
1. People repeatedly come into your shop asking to buy something, AND
2. It is similar to the kinds of things you sell, from the suppliers you usually get supplies from, AND
3. You don't sell it,
then it sounds like your neighbour the notorious liar is doing profitable marketing for your business and sending you leads which you could profitably sell to, if you sold the item.
If there's a single customer who arrives via hallucination, ignore it. If there's a stream of them, why would you not serve them if you can profit by doing so?
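As a sketch of that decision rule (a toy I just made up; the threshold and the names should_stock, unit_margin, and so on are illustrations, not anyone's real criteria):

    # Toy rendering of the rule above; the threshold stands in for
    # "a stream of them" and the margin check for "profit".
    def should_stock(requests_per_month, fits_product_line,
                     already_stocked, unit_margin):
        if already_stocked:
            return False  # nothing to change
        if requests_per_month < 5:  # a lone hallucinated visitor: ignore
            return False
        return fits_product_line and unit_margin > 0

    print(should_stock(requests_per_month=12, fits_product_line=True,
                       already_stocked=False, unit_margin=3.50))  # True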
There are obviously instances you'd ignore, and you seem to be focussing on those rather than on what OP was obviously talking about: repeat instances of sensible ideas.
If you never use the passive voice, you will be unable to emphasize the object of the sentence in cases where that emphasis actually matters, and you'll end up needing more words to get the same effect.
If you never make left turns, you end up having to drive one block past and make three right turns.
So even though best practice might be to avoid the passive voice for various reasons, sometimes it is cleaner. And even though best practice is to avoid left turns (for efficiency, safety, etc.), sometimes it's worth it to just take the left turn. So even UPS trucks will make left turns, just not nearly as often.
I was just pointing out that English, due to its strict word order, is more reliant on the passive voice to change word order than less inflexibly-ordered languages.
To borrow from a sentence I used in an earlier comment, here's a fragment of Spanish.
"...sólo porque te impresionó un espectáculo de magia barato."
The equivalent English would be "...just because you were impressed by a cheap magic show."
The English sentence has to use the passive voice to put the verb "impress" at the beginning of that phrase, whereas you still use the active voice in Spanish, just with the word order putting the verb first.
>> The relevance of participles is that a passive clause always has its verb in a participial form.
> So, what are you saying they do think it was meant to apply to, if every passive clause always includes a participle? I'm confused.
OK, you're confused.
In the general case, an English verb has five forms†: "plain form" [go], "preterite form" [went], "present third-person singular form" [goes], "-ing form" [going], and "-en form" [gone].
The last two of those are participial forms.
It is true that a passive clause always has its verb in a participial form. We can be even more specific than that: the verb is always in -en form. This is true without exception because passive markers occur last in the sequence of auxiliary verbs that might modify a primary verb, and therefore always directly control the form of the primary verb.
It is not true that a passive clause always includes a participle, except in the sense of the name we give to the form of the verb. -ing and -en are "participial forms" because the verb takes one of those forms when it is a participle. But it can also take them for other reasons.
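If it helps, here's the distinction as I'd model it in code (a toy lookup table I typed in by hand, not a real morphological analyzer; the crude looks_passive test stands in for actual parsing):

    # Hand-entered forms for a few verbs; irregular verbs make this
    # a lookup table rather than a spelling rule.
    FORMS = {
        "go":   {"plain": "go",   "preterite": "went", "3sg": "goes",
                 "ing": "going",  "en": "gone"},
        "give": {"plain": "give", "preterite": "gave", "3sg": "gives",
                 "ing": "giving", "en": "given"},
        "make": {"plain": "make", "preterite": "made", "3sg": "makes",
                 "ing": "making", "en": "made"},
    }

    BE = {"am", "is", "are", "was", "were", "be", "been", "being"}

    # Crude test: a form of "be" directly before the -en form of the
    # verb. "is given" passes; "has given" contains the same -en form
    # but fails, because the perfect auxiliary "have" precedes it.
    def looks_passive(aux, token, lemma):
        return aux in BE and token == FORMS[lemma]["en"]

    print(looks_passive("is", "given", "give"))   # True  (passive)
    print(looks_passive("has", "given", "give"))  # False (perfect)
    print(FORMS["make"]["preterite"] == FORMS["make"]["en"])  # True

Note the last line: for make, the preterite and the -en form coincide in the single token made, which feeds directly into the "already made" question below.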
> the passive-clause examples given in Pullum's blog post I linked above include "Each graduate student is given a laptop," which sounds structurally identical to your example
Sure. If you wanted to put the present passive third-person plural form of make in that sentence, that form† would be are made. The sentence would have all the same words in the same order.
But that would make no semantic sense. For a point to be "already made", as opposed to having "already" been "made", you need to interpret made as an adjective, describing the state in which the point currently exists. The temporal structure of "each graduate student is given a laptop" (a recurring, habitual event) allows the present noncontinuous form of the verb in a way that "in addition to the points that are already made" does not; if I try to interpret made there as a passive verb in the present tense, I get a strong sense that the sentence is malformed.
† You might notice that these two uses of the word form conflict with each other. The fact that form is used in both of these ways is why I'm annoyed at your comment conflating "participle" with "participial form". "Participle" is appropriate when you're talking about inflecting a verb according to how you want to use it in a sentence; it is a concern with the language's grammar. "Participial form" is appropriate when you're talking about the actual tokens that can appear in a sentence, with no regard to what they might mean or how they might be used; it is a concern with what you might think of as the language's "anatomy".
There's usually a good reason why a business doesn't offer something people think it should: most often, it can't be profitable enough at a price point customers will accept.
Which way you should turn depends on where you are trying to go; which voice you should use depends on what you are trying to say, who your audience is, what you want to emphasize, and so on.
You can play tricks to come close to OSV and VSO for purposes of emphasis: "A vos un espectáculo de magia barato te impresionó", "Te impresionó un espectáculo de magia barato a vos," but the "te" is still obligatory. And you can do something similar in informal or poetic English: "Just because, you, a cheap magic show impressed you." But the passive offers more flexibility. I posted some other English examples yesterday in https://news.ycombinator.com/item?id=44493065.
But Spanish's inflectional structure is very much reduced from classical Latin, with a corresponding reduction in word-order flexibility. I think any of the six permutations discussed above would be perfectly valid in classical Latin, although my Latin is very weak indeed, so I wouldn't swear to it.
APOLOGIES MY GRENADE HAND YOUR TO PLEASE CONVEY
When you say "We can be even more specific than that: the verb is always in -en form," you're also contradicting Pullum when he points out that the present participle is also used to form certain passive clauses. An example of this is also given in the Wikipedia article, "I saw John eating his dinner."
So, while I certainly agree that I'm confused, I think you're mistaken. I'm not sure exactly how, because I'm not sure exactly what you're trying to express, but I'm sure that sufficient study of sources like those will make it clear to you.
(a) Yes, Geoff Pullum identified a use of the -ing form that is plausibly called passive. I hadn't looked at that before writing my comment.
(b) But you didn't. "I saw John eating his dinner" is not an example of that usage, and is not plausibly called passive in any sense. If you pulled that out of the Wikipedia article you just linked, you should have seen the annotation of that very example noting explicitly that the participle is active.
> for example, https://en.wikipedia.org/wiki/Participle, which draws no distinction between "participle" and "participial form"
This isn't even true. Look at the opening sentence:
>> In linguistics, a participle (from Latin participium 'a sharing, partaking'; abbr. PTCP) is a nonfinite verb form that has some of the characteristics and functions of both verbs and adjectives.
Or here:
> The linguistic term, past participle, was coined circa 1798 based on its participial form, whose morphology equates to the regular form of preterite verbs.
We have a distinction between the participle and the form of the participle. The participle is, in this analysis, named after the form, though there is room for debate on that point.
Compare the sentence "I've been there before", where been is in a participial form but is not a participle† and, obviously, displays no characteristics or functions of adjectives.
The second (and final) mention of "participial forms" does confuse the two concepts:
> Some languages have extensive participial systems but English has only two participial forms
This is a conceptual error; for example, Latin has more participial forms (4, or if you really want to stretch it 48) than English, but its participial system is less extensive.
> I think you're mistaken. I'm not sure exactly how, because I'm not sure exactly what you're trying to express, but I'm sure that sufficient study of sources like those will make it clear to you.
You talk a surprisingly big game for someone who doesn't even claim to know what he's talking about.
† There is a school of formalism within linguistics that says that grammatical categories don't exist within a language unless they are reflected in the language's inflectional system. The best-known idea from this school is that English has a past tense but not a future tense. I don't find this plausible; compare https://en.wikipedia.org/wiki/Lexical_aspect , or just the general concept of an isolating or agglutinative language. On this analysis, but only on this analysis, you could say that in the phrase I've been there before, the token been is a participle. You'd have a lot of trouble explaining why it's a participle, though. On any other analysis, it's part of a finite verb, and by definition cannot be a participle.
But by that school of analysis, you would also have to say that English has no passive voice. (Except in the edge case using -ing forms.) And you run into some very awkward problems more or less immediately; when you look at the sentence "I'm going out for the evening", you have to claim that this sentence is fundamentally about being (the finite verb) rather than going out.
The rest of it I don't understand.