Yeah, I found one manifestation of something else that they had fixed by the time someone looked at it. The fix in the notes didn't look anything like my bug; only by observing that it now worked was I able to figure out that I had been the blind man trying to describe an elephant.
Pretty standard process.
One being that the most recent version is on their CDN but not their [npm package](https://www.npmjs.com/package/livephotoskit?activeTab=readme), which hasn't been updated in 7 years. You know what they did with this issue? They marked it as "Unable to diagnose".
I also mentioned something about their documentation being out of date for a function definition. That issue has remained open for 4 years now.
This is not too unusual. I've completely given up on bug reports, it's almost always a complete waste of my time.
I'm currently going around in circles with a serious performance issue with two different vendors. They want logs, process lists and now real time data. It's an issue multiple people have complained about in their forums and on reddit. The fact that this exact same thing is going on with TWO different companies ...
It has been known for decades that Apple largely ignores bug reports.
A good compromise might be to select high-quality bugs, or users with a good reputation, and disable auto-closing for them. In the age of AI it shouldn't be too hard to correlate all those low-quality duplicates and figure out what's worth keeping alive, no?
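As a rough sketch of what that duplicate correlation could look like (entirely hypothetical: a real tracker would use embeddings or an LLM rather than plain string similarity, and `find_duplicates` and its threshold are invented for illustration):

```python
from difflib import SequenceMatcher

def find_duplicates(reports, threshold=0.7):
    """Group bug reports whose titles look alike.

    `reports` is a list of title strings; `threshold` is the similarity
    ratio above which two reports count as duplicates. Returns a list
    of index groups, one group per cluster of similar reports.
    """
    groups = []
    assigned = set()
    for i, title in enumerate(reports):
        if i in assigned:
            continue
        group = [i]
        assigned.add(i)
        for j in range(i + 1, len(reports)):
            if j in assigned:
                continue
            ratio = SequenceMatcher(None, title.lower(), reports[j].lower()).ratio()
            if ratio >= threshold:
                group.append(j)
                assigned.add(j)
        groups.append(group)
    return groups
```

With clusters like these in hand, a maintainer could keep one canonical issue alive per group instead of auto-closing everything indiscriminately.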
Yes, I hate it too.
Put yourself in the position of the employee on the other side. They currently have 647 bugs in their backlog. And they also have actual work to do that's not even related to these bugs.
You come to work. Overnight there are 369 emails (after many filters have been applied) and 27 new bugs (14 of which are against a previous version). You triage. If you think 8h is enough to deal with 369 emails (67 of which are actionable. But which 67?) and actually close 27 bugs, then… well, then you'd be assigned another 82 bugs and get put on email lists for advisory committees.
Before you jump to "why don't they just…", you should stop yourself and acknowledge that this is an unsolved problem. Ignore them, let them pile up? That's not a solution. Close them? No! It's still a problem! Ask the reporter to verify it (and implicitly confirm that they still care)? That's… a bit better, actually.
"Just hire more experts"… experts who are skilled enough, yet happy to spend all day trying to reproduce these bugs? Sure, you can try. But it's far from a simple "why don't they just…".
That's a classic trick where the developer will push back on the bug author and say "I can't reproduce this, can you verify it with the latest version?" without actually doing anything. And if it doesn't get confirmed then they can close it as User Error or Not Reproducible.
Of course, the only way to counter this is by saying "Yes I verified it" without actually verifying it.
I suspect that this is a common approach. Maybe it even works often enough to make it standard practice.
For myself, I've stopped submitting bug reports.
It's not being ignored that bothers me; it's that when they do pay attention, they basically insist that I become an unpaid systems-engineering QC person and go through enormous effort to prove the bug exists.
When I close an old bug that is not actionable, I do feel bad about it. But keeping the bug open when realistically I can't really do anything with it might be worse.
At some point the leadership introduced an SLA, first for high- then for medium-priority bugs. Why? Because bugs would sit in queues for years. The result? Bugs would often get downgraded in priority at or close to the SLA deadline. People even wrote automated rules to alert them when the bugs they filed got downgraded.
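Those automated rules don't need to be fancy. A minimal sketch, assuming you can periodically snapshot `{bug_id: priority}` pairs from the tracker's API (the snapshot format and `detect_downgrades` are invented for illustration; lower number means higher priority, as in p0..p4):

```python
def detect_downgrades(previous, current):
    """Compare two snapshots of {bug_id: priority} and return the
    IDs of bugs whose priority number went up between snapshots,
    i.e. bugs that were downgraded (p2 -> p3 and so on)."""
    return sorted(
        bug_id
        for bug_id, prio in current.items()
        if bug_id in previous and prio > previous[bug_id]
    )
```

Run it on a schedule and send yourself an alert for every ID it returns, and you get exactly the watchdog the commenter describes.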
Another trick was to throw it back to the user, usually after months, ostensibly to request information: ask "is this still a problem?" or just add "could not reproduce". Often you'd get no response. Sometimes the person was no longer on the team or with the company, or they'd just lost interest or didn't notice. Great, it's off your plate.
If you waited long enough, you could say it was "no longer relevant" because that version of the app or API had been deprecated. That's also a good excuse to bounce it back with "is this still relevant?"
Probably the most Machiavellian trick I saw was to merge your bug into another, vaguely similar one that you didn't own. Why? Because this was hard to unwind and not always obvious.
Anyone who runs a call center or customer line knows this: you want to throw it back at the customer because a certain percentage will give up. It's a bit like health insurance companies automatically sending a denial for a prior authorization: to make people give up.
I once submitted some clear bugs to a supermarket's app, and I got a response asking me to call some 800 number and make a report. My bug report contained complete steps to reproduce the issue. I knew what was going on: somebody simply wanted to mark the issue as "resolved". I'm never going to do that.
I don't think you can trust engineering teams (or, worse, individuals) to "own" bugs. They're not going to want to do them. They need to be owned by a QA team or a program team that will collate similar bugs and verify something is actually fixed.
Google had their own versions of things. IIRC bugs had both a priority and a severity for some reason (they were the same 99% of the time), each between 0 and 4. So a standard bug was p2/s2; p0/s0 was the most severe and meant a serious user-facing outage. People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".
I've basically given up on filing bug reports because I'm aware of all these games and getting someone to actually pay attention is incredibly difficult. So much of this comes down to stupid organizational-level metrics about bug resolution SLAs and policies.
I’ll fill out a bug report, wait a few days to a week for a response (which is often AI-generated), and then 48 hours later their bot marks it as stale, telling me to check whether it’s still broken or they'll assume it’s fixed lol
Each and every Radar (Apple's internal issue tracker is called Radar, and each issue is called a Radar) follows a state machine, going from the untriaged state to the done state. One hard-coded state in this is Verify. Each and every bug, once Fixed, cannot move to Closed without passing through the Verify state. It seems like a cool idea on the surface. It means that Apple assumes and demands that everything must be verified as fixed (or feature complete) by someone. Quite the corporate value to hold the line on, and it goes back decades.
I seriously hated the Verify state. It caused many pathologies. Imagine trying to run a burndown of your sprint when zero of the Radars are closed, because they have to be verified in production before being closed, meaning you cannot verify until after the release. Another pathology is that lots (thousands and thousands) of Radars end up stranded in Verify. Many, many engineers finish their fix, check it in, it gets released and then they move on. This led to a pathology that the writer of this post got caught up in: There is lots of "org health" reporting that goes out showing how many Radars are unverified and how long your Radars stay in the unverified state on average. A lot of teams simply close Radars that remain unverified for some amount of time because they are being "graded" on this.
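To make the pathology concrete, here's a toy model of a workflow like the one described (the state names and transitions are my guesses from the comment, not Radar's actual schema): because `CLOSED` is only reachable through `VERIFY`, a fix that ships but is never verified strands the ticket forever.

```python
from enum import Enum, auto

class State(Enum):
    UNTRIAGED = auto()
    TRIAGED = auto()
    FIXED = auto()
    VERIFY = auto()
    CLOSED = auto()

# Legal transitions. Note there is no FIXED -> CLOSED shortcut:
# every fix must pass through VERIFY before it can be closed.
TRANSITIONS = {
    State.UNTRIAGED: {State.TRIAGED},
    State.TRIAGED: {State.FIXED},
    State.FIXED: {State.VERIFY},
    State.VERIFY: {State.CLOSED, State.TRIAGED},  # verification can fail
    State.CLOSED: set(),
}

def advance(state, target):
    """Move a ticket to `target`, enforcing the legal transitions."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {target.name}")
    return target
```

An engineer who checks in a fix and moves on leaves the ticket parked in `VERIFY`, which is exactly how thousands of Radars end up stranded there.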
I believe they also have attorneys. Perhaps that's how Apple could make bug-tracking more effective -- hire a prosecuting attorney and a defending attorney for each bug.
There is some bot that will match your issue to 3 other vaguely related issues, then auto-close it in 3 days. The other vaguely related issues were themselves auto-closed for inactivity. Nothing is ever fixed, which is why they can't keep the thing from messing with your scroll position for years now.
If you're not testing your code under extreme latency it will almost certainly fail in all kinds of hilarious ways.
I spend a lot of time with 4G as my only internet connection. It makes me feel that most software is quickly produced, poorly tested, and thrown out the door on a whim.
The sentiment feels like software folks are optimizing for the local optimum.
It's the programmer equivalent of "if it's important, they'll call back", while completely ignoring the real-world first- and second-order effects of such a policy.
Edit: this comment elsewhere in the thread is closer to my experience: https://news.ycombinator.com/item?id=47523107 Certainly in my own stint at Google I saw the same thing--bugs below a certain priority level would just never get looked at.
Sometimes I would advocate based on business reasons to fix the bug. Or to de-prioritize it or close it. I took every side possible, depending. As did the more pragmatic of the engineers.
I miss the give and take, if not the feeling of perpetual technical debt.
Well, it was trained on StackOverflow.
Or with open source projects. Fucking stalebot.
I'm not going to lie. That's not who I am. If Apple really wants to close a bug report when the bug isn't fixed, that's on their conscience, if they have one.
Mozilla is famous for having 20-year-old bug reports that get fixed after all that time.
Turns out there was a known bug in Microsoft schannel that had yet to be patched and they'd wasted weeks of our effort by not searching their own bug tracker properly.
Then you assume, naively, that this means they've recognised that there really is a product problem and will go off and fix it. In turn, however, the support tech needs to reproduce the issue for the development team.
They invariably fail to do so for any number of reasons, such as: This only happens in my region, not others. Or the support tech's lab environment doesn't actually allow them to spin up the high-spec thing that's broken. Or whatever.
Then the ticket gets rejected with "can't reproduce" after you've reproduced the issue, with a recorded video and everything as evidence.
If you then navigate that gauntlet, the ticket is most typically rejected with "It is broken like that by design, closed."
But back in the 80s and 90s, margins were significantly higher. If you look at hardware, I recall selling hardware with 30% margin, if not more... even 80% on some items.
Yet what came with that was support, support, support. And when you sell 5 computers a month, instead of 500, well.. you need that margin to even have a store. Which you need, because no wide-scale internet.
On the software side, it was sort of the same. I remember paying $80 for some pieces of software, which would be like $200 today. You'd pay $1 on an app store for such software, but I'd also call the author if there was a bug. He'd send an update in the mail.
I guess my point is, in those days, it was fun to fix issues. The focus was more specific, there was time to ply the trade, to enjoy it, to have performant, elegant fixes.
Now, it's all "my boss is hassling me and another bug will somehow mean I have to work harder", which is .. well, sad.
It was not fixed. So I took a video of myself refilling my water bottle, attached it to the ticket, and re-opened it. They actually fixed it after that. The video was 2m12s long (and I spent god knows how long making the video file small enough to attach to the ticket lol)
I don't think I've seen an issue of theirs that wasn't auto-closed.
The joke is that Apple owns the 17.x.x.x class-A range on the Internet (they got in early; they also have a class-B and used to have a second class-B that they gave back), and what engineers were really saying is that they could not reproduce it on the AD systems that Apple had set up (often because AD had been set up with a .local domain, a real no-no, but one that appeared in Microsoft's training materials as an example at the time...).
Apple did not say they couldn't reproduce it. Neither did they say that they thought they fixed it. They refused to say anything except "Verify with macOS 26.4 beta 4".
> and even if they are 100% reproducible for the user, it's not always so easy for the developers
It's not easy for the user! Like I said in the blog post, I don't usually run the betas, so it would have been an ordeal to install macOS 26.4 beta 4 just to test this one bug. If anything, it's easier for Apple to test when they're developing the beta.
> the most "efficient" thing is just to ask the user to re-test.
Efficient from Apple's perspective, but grossly inefficient from the bug reporter's perspective.
> realistically I can't really do anything with it
In this case, I provided Apple with a sample Xcode project and explicit steps to reproduce. So realistically, they could have tried that.
I suspect that your underlying assumption is incorrect: I don't think Apple did anything with my bug report. This is not the first time Apple has asked me to "verify" an unfixed bug in a beta version. It seems to be a perfunctory thing they do before certain significant OS releases to clear out older bug reports. Maybe they want to focus now on macOS 27 for WWDC and pretend that there are no outstanding issues remaining. I don't know exactly what's going through their corporate minds, but what spurred me to blog about it is that they keep doing this same shit.
Yeah, I've done that. I find it much more honest than automatically closing it as stale or asking the reporter to repeatedly verify it even if I'm not going to work on it. The record still exists that the bug is there. Maybe some day the world will change and I'll have time to work on it.
I'm sure the leadership who set SLAs on medium-priority bugs anticipated a lot of bugs would become low-priority. They forced triage; that's the point.
> People even wrote automated rules to see if their bugs filed got downgraded to alert them.
This part though is a sign people are using the "don't notify" box inappropriately, denying reporters/watchers the opportunity to speak up if they disagree about the downgrade.
At the company I worked at (not Google, but a major one) it was the same. We used Salesforce, the "Lightning Experience" or whatever it was called [0]. Our version was likely customized for our company, but I think the idea was the same: one field, I think the "priority", was for our eyes only; the other, the "severity", was for the customer. If the customer insisted on raising the severity, we'd set it to sev1, but the priority reflected what we actually thought. I was actually surprised that in the ~4 years I was there, no one accidentally revealed the priority to a customer, especially when a lot of people were sloppily copy-pasting text from Slack or other internal tools that referred to a case by either its severity or its priority.
Those were heavy customers with SLAs, though, not supermarket apps or anything like that.
What was sad was that our internal tools, no matter how badly written, with 90's UIs and awful security practices, were 50 times as fast as whatever Salesforce garbage we had to deal with. Of course, there was a lot of unneeded redundancy between the tools, so the complexity didn't stay in the Salesforce tool. But somehow the internal tools, written by someone 10 years ago and barely maintained, still dealing with complex databases of who-what-when-how, felt like having the DB locally on a supercomputer, while SF felt like asking a very overworked person to manually run your query on each click. I'm exaggerating, but only by a bit.
[0] That name was funny because it was slow as shit. Each click took 5 to 20 seconds to update the view. I wonder what the non-Lightning version was.
I think most teams use verify as a "closed" state to hide all that messiness. But sure, zero bugs is a project management fiction and produces perverse outcomes.
In this case the bug wasn't fixed.
> A lot of teams simply close Radars that remain unverified for some amount of time because they are being "graded" on this.
The simple solution here: you should also be graded on closing bugs that get re-opened.
1. Apple engineers actually attempted to fix the bug.
2. Feedback Assistant "Please verify with the latest beta" matches the Radar "Verify" state.
I don't believe either of those are true.
As do I.
> In the three years since I filed the bug report, I received no response whatsoever from Apple… until a couple of weeks ago, when Apple asked me to “verify” the issue with macOS 26.4 beta 4 and update my bug report.
The author is extremely lucky to even get a response. I’ve filed several issue reports (as an end user, not as a developer) on Feedback Assistant over the years. Not only do the issues not get fixed, but there’s nary a response or any indication that anyone has looked or is planning to look at it. Apple does not even bother to close my issue reports. They just stay open.
Sometimes, some issues may get fixed. But no notice of the fix being done. I’d never know at all.
So yes, I certainly do plead insanity.
Their customer service bounced me around because unlocking my stuck git processes, which their system locks you out of for security reasons, was too much work for them. My project was unusable, and they just auto-closed the ticket after never following up on their commitments. That was despite me consistently putting in work for them, doing the software-engineering debugging, and showing them why it needed to be manually reset on their end.
After I complained on a twitter post tagging their CEO, someone reached out again finally and expected me to open a brand new fresh ticket because "their system needs this". Ok yeah no thank you, the team avoiding responsibility by auto-closing unresolved tickets expects me to put in more work and open a new ticket because you can't figure out how to re-open one or create one on my behalf. Lazy.
Seriously, auto-closing issues that haven't seen activity in 3–6 months is one of the best things you can do for your project.
If nobody's touched it in that long, it's time to accept it's never getting prioritized -- it's just collecting dust and making your backlog feel way heavier than it actually is.
So let it go. Let it go! (It feels good to channel your inner Elsa!)
A clean backlog is a healthy backlog. You'll actually be able to find the stuff people care about instead of wading through years of noise. And if something truly matters? Don't worry... those issues come back, they always do.
You feeling accomplished by seeing an empty list is not the goal!
Plus, I’ve been in jobs where fixing bugs ends up being implicitly discouraged; if you fix a bug then it invites questions from above for why the bug existed, whether the fix could cause another bug, how another regression will be prevented and so on. But simply ignoring bug reports never triggered attention.
These auto-closing policies usually originate from somewhere else.
Of course, the developers should determine whether the bug could have a greater impact, one that will or already does affect revenue, before closing it; not doing that is negligent.
As a software developer, I don't have any problem with this. If a bug doesn't bother somebody enough for them to follow up, then spend time fixing bugs for people who will. Apple isn't obligated to fix anybody's bug.
It's not like they were nagging him about it: it's been years, and they had major releases in the meantime. It's quite possible it was fixed as a side effect of something else.
Large monopolistic tech companies like Apple and Microsoft can afford to ignore this stuff for years because there are few realistic alternatives. But longer term eventually a disruptive competitor comes along who takes product quality and customer service more seriously.
Not directed at you of course, just the proverbial “you” from the frustration of a purchaser of software.
- We owe you nothing! And the fact that people still expect maintainers to work for them is really sad, IMHO.
- Unlike corporate workers, nobody is measuring our productivity therefore we have no incentive to close issues if we believe they are unfixed. That means that when we close the issue, we believe it has a high chance of being fixed, and also we weigh the cost of having many maybe-fixed open issues against maybe closing a standing issue, and (try to) choose what's best for the project.
Microsoft support is guilty of this, especially for Azure & 365 issues.
Like sorry, but you aren't paying me to debug your software. Here's a report, and here's proof of me reproducing the problem & some logs. That's all I'm going to provide. It's your software, you debug it.
I've heard this from others before but I really don't understand the mindset.
What's the harm in keeping the bug open?
And then, no matter what I did, I could never, ever get a single word out of anyone about that case again. I often wonder if it's still open.
Since this is your typical automated bot garbage process, couldn't you just respond with your own bot voice and say it's "verified" and is still an issue?
I think as long as the issue isn't stuck with any one person then it's easier to leave open until it's actually fixed, like the 20+ year old Mozilla bug reports. Big corpo bureaucratic nonsense just ruins everything.
I want to draw out this comment because it's so antithetical to what Apple marketed itself as standing for (remember the wonderful 1984 commercial Apple created, which was very much against the big behemoths of the day and the way they operated).
We're at the point where we've normalized crappy behavior and crappy software so long as the bottom line keeps moving up and to the right on the graph.
Not, "Let's build great software that people love.", but "How much profit can we squeeze out? Let's try to squeeze some more."
We've optimized for profit instead of happiness and customer satisfaction. That's why it feels like quality in general is getting worse, profit became the end goal, not the by-product of a customer-centric focus. We've numbed ourselves to the pain and discomfort we endure and cause every single day in the name of profit.
:)
Funny at first but I’m coming around to that perspective
…yet
Closing bugs automatically after a cron job demanded that the user verify reproducibility for the 11th time: obviously bad.
Either it's quickly produced and thrown out the door because it's a startup trying to iterate and find product-market fit ASAP, or because it's a bigcorp whose metrics are all unrelated to the software.
I'm proud of having fixed everything properly, but I won't ever repeat it unless the company actually holds that high a bar across the board.
We do this. Because frankly, very often the bug has been reported by others and has been fixed, we just can't connect the dots in our ticketing system.
That's less than ideal, of course, but given that a lot of the tickets we get are very poorly described, it's hard. It's one aspect where I have genuine hope AI can help us.
But I find that sometimes I can tell from experience that the IR is not actionable and that it will never be fixed. Some examples:
* There's not enough info to reproduce the issue, and the user either can't or won't be able to reproduce it themselves. Intermittent bugs generally fall into this category.
* The bug was filed against some version of the software that's no longer in production (think of the cloud context, where the backend service has been upgraded to a newer version).
Sometimes the cost of investigating a bug is so high relative to the pain it causes that it just gets closed as a WONTFIX. These sometimes suck the most, because they are often legitimate bugs with possible fixes, but they will never be prioritized highly enough to get fixed.
Or sometimes the bug is only reproducible using some proprietary data that I don't have access to and so you sometimes have no choice but to ask the bug filer "can you still reproduce this?".
Computer systems are complicated. And real-world systems consisting of multiple computer systems are even more complicated.
If the answer is "everything in that part of the code has been rewritten", or "yeah, that was a dup, we fixed that", or "there isn't enough information here to try to reproduce it even if we wanted to", or "this is a feature request that we would never even consider", or some other similar thing, then sure, delete it.
otherwise you're just throwing away useful information.
Edit: I think this difference of opinion is due to a cultural difference between (a) "the software should be as correct as reasonably possible" and (b) "if no one is complaining, there isn't a problem".
But in the other cases, closing the bug seems to me to be a way to perturb metrics. It might be true that you'll never fix a given bug, but shouldn't there be a record of the "known defects", or "errata" as some call them?
For your specific scenarios:
- lack of information on how to reproduce or resolve a bug doesn't mean it doesn't exist, just that it's not well understood.
- For the "new version" claim, I've seen literal complete rewrites contain the same defects as the previous version. IMHO the author of the new version needs to confirm that the bug is fixed (and how/why it was fixed)
- I agree there are high cost bugs that nobody has resources to fix, but again, that doesn't mean they don't exist (important for errata)
- Similarly with proprietary data, if you aren't allowed to access it, but it still triggers the bug, then the defect exists
In general my philosophy is to treat the existence of open bugs as the authoritative record of known issues. Yes, some of them will never be solved. But having them in the record is important in and of itself.
https://devblogs.microsoft.com/oldnewthing/20241108-00/?p=11...
But I think modern industry pretends it's all fine, to convince itself that it's OK to chase the next feature instead.
Apple has done the best job of creating this expectation.
Apple Feedback = compliments (and ideas)
Public Web = complaints & bug reports
Apple Support = important bug reports (you can create the feedback first, then call immediately)
—
Prev comment w/link (2mo ago): https://news.ycombinator.com/item?id=46591541
Good luck doing that when the bug report (like virtually all bug reports in the wild) doesn't provide sufficient reproduction steps.
You could sink an infinite amount of time investigating and find nothing. At some point you have to cut off the time investment when only one person has reported it and no devs have been able to reproduce it.
I agree with this iff it's being done manually after reading the issue. stalebot is indiscriminate and as far as "owing" the user, that's fair, but I'd assume that the person reporting the bug is also doing you a favor by helping you make things more stable and contributing to your repo/tool's community.
In any case I would have said it sounds difficult on every front
That way, you’re at least not deluding yourself about your own capacity to triage and fix problems, and can hopefully search for and reopen issues that are resurfaced.
When I developed software I would jump right on top of any bug reports immediately, and work until they were fixed. I was grateful to my customers for bringing them to my attention.
Error tracking and tracing make it fairly straightforward to retroactively troubleshoot unreproducible issues.
> this often isn't nefarious. It's a simple cost/benefit analysis of spending time on something that one user is complaining about versus a backlog of higher business priorities.
You can triage without closing tickets, so yes, it is nefarious; it is metric hacking. If you're having trouble reproducing, tag the ticket "needs verification" or something similar. Closing a ticket isn't triaging; it's sweeping problems under the rug.
I will disagree there. The engineers often want to fix the bugs. Management is telling them it's all hands on deck for (insert company goal here; probably AI right now).
Followed by management also telling them they have too many bugs, of course. In a condescending tone.
I've seen this at a couple places... I think it's supposed to help model things like if something is totally down, that's an S0... But if it's the site for the Olympics and it's a year with no Olympics, it's not a P0.
Personally, that kind of detail doesn't seem to matter to me, and it's hard to get people to agree to standards about it, so the data quality isn't likely to be good, so it can't be used for reporting. A single priority value is probably more useful. Priority helps responsible parties decide what issue to fix first, and helps reporters guess when their issue might be addressed.
> People would often change a p2/s2 to p3/s3, which basically meant "I'm never going to do this and I will never look at it again".
I learned this behavior because closing with wontfix would upset people who filed issues for things that I understand, but am not going to change. I'm done with it, but you're going to reopen it if I close it, so whatever, I'll leave it open and ignore it. Stalebot is terrible, but it will accept responsibility for closing these kinds of things.
A sensible way would probably be something like this:

* Run the cleanup yearly, on bugs not touched (which is different from age!) for the last 2 years.
* Mark the bug as waiting for an answer and add an automated comment: "is it still happening / can you reproduce it on the newest version?"
* If that goes unanswered for, say, 3 months, THEN close it.

That way at least it's a "real" issue, and even if a solution isn't being worked on, maybe someone will see a workaround that someone else posted in the comments, rather than creating a new report that gets closed as a duplicate...
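A sketch of that policy as code (the `Bug` dataclass and field names are invented; the 2-year and 3-month windows are the ones the comment proposes):

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class Bug:
    last_touched: date
    waiting_since: Optional[date] = None  # set when we ask "still happening?"
    closed: bool = False

def sweep(bugs, today):
    """One pass of the policy: flag bugs untouched for 2 years, and
    close flagged bugs that have waited 3 months with no answer.
    (The flagging runs yearly; the close check can run more often.)"""
    for bug in bugs:
        if bug.closed:
            continue
        if bug.waiting_since is not None:
            if today - bug.waiting_since > timedelta(days=90):
                bug.closed = True  # reporter never answered
        elif today - bug.last_touched > timedelta(days=730):
            bug.waiting_since = today  # post the automated "can you reproduce?" comment
```

Any answer from the reporter would clear `waiting_since` (and update `last_touched`), which keeps genuinely live issues out of the closing path.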
Meanwhile I've seen shit as asinine as marking a bug stale 30 days after it was reported.
That seems to be the case described in the article. In such a situation, I think it's dishonest to ask the reporter to expend even more effort when you've spent zero. Just close it if you don't want to do it, you don't have to be a jerk to your customers, too, by sending them off on a wild goose chase.
Otherwise, why not ask the reporter to reproduce the issue every single day until you choose to fix it in some unknown point in the future, and if they miss a day, it gets closed? That seems just as arbitrary.
"Please consider cosmic rays hitting the computer, defective RAM chips, and weird modifications of the system before submitting the bug. Unless you explicitly acknowledge that, your bug will be closed automatically in 30 days. Thank you very much"
The great thing about open source projects you can just fix the bug yourself and submit a PR, or fork the whole project if the maintainers won't merge your changes. If you don't have the time or skills yourself then you can even pay a contractor to do it for you.
"I defecated this issue. Closed."
If their week is already booked full just trying to keep up with the roadmap deadlines, a bug ticket feels like being tossed a 25lb weight when you're drowning.
You could say: "but have pride in your work!"
But if your company only values shipping, not fixing, that attitude doesn't make it through the first performance review.
That's a reasonable approach, but I don't understand how it's any more or less sane than autoclosing them with a stale label.
Whether these sorts of bugs are "open but stale" or "closed because stale" seems like it depends on whether the project defines "closed" as "no work planned" or "fixed", which both seem valid.
Either way these bugs will be hidden from developer dashboards but still available in the database so there's no practical difference, you just need to make sure everyone is on the same page about the meaning of "closed".
Do not throw the ball back with "this happens because the bugs are not well written". Stalebot closes bugs regardless of whether they are well written, and it ensures no one will put effort into writing another well-written bug again.
I disagree. If you discover a bug that makes an open source library unusable to you, after spending time on learning and using that library, and the authors close the bug as a wontfix, I think being annoyed is quite reasonable, even expected.
I agree regarding the need to triage at scale, unfortunately most large companies I've encountered fail to do this well and seem ill-equipped to accept high quality bug reports of edge case defects generated by expert users (save for the odd exception that arrives by social media from someone who happens to have enough followers to get their attention outside the regular support pipeline).
In my experience this doesn't usually boil down to a systems issue (the ticketing systems etc. exist that should theoretically allow for eventual escalation to the right engineer/developer) but a corporate culture thing (the company just doesn't prioritize customer feedback, especially at the level where staff who actually deal with customers interface with the teams that write/maintain the software). Often it's genuinely valued at the C-level (the Bezos story of calling Amazon's tech support line during an exec meeting is a fun example) but diluted somewhere between them and the rank-and-file.
(PS: I'm not arguing with you, and I appreciate that you took the time to craft a thoughtful reply)
Otherwise OSS is pretty much as-is, where-is, with the exception of very widely used and corporately supported projects.
My preference is to treat defects like feature work: size them and plan them. Yes, you might not get all the feature work done, but the team is accountable for everything they make.
This is entirely up to the maintainer, who puts in the work and gives up their time/money to do so. If you want to be in charge on a given repo, put in the work and become a real contributor, if not accept the rules the maintainers choose.
Obviously it's up to the maintainer. I'm saying what the maintainer should do, not what they can do.
You shouldn't presume to know what is best for an open source maintainer of any given project - projects vary, reports vary in quality, and the job of maintenance is not an easy one.
Jeff Johnson (My apps, PayPal.Me, Mastodon)
Why do I file bug reports with Apple Feedback Assistant? I plead insanity. Or perhaps addiction. I seesaw between phases of abstinence and falling off the wagon. I’ve even tried organizing a public boycott of Feedback Assistant, with a list of demands to improve the experience for users, but the boycott never caught on with other developers. Regardless, an incentive still exists to file bug reports, because Apple actually fixes some of my bugs. My main complaint about the bug reporting process is not the unfixed bugs but rather the disrespect for bug reports and the people who file them. Apple intentionally wastes our time with no regrets, as if our time had no value, as if we had some kind of duty to serve Apple.
In March 2023, I filed FB12088655 “Privacy: Network filter extension TCP connection and IP address leak.” I mentioned this bug report at the time in a blog post, which included the same steps to reproduce and example Xcode project that I provided to Apple. In the three years since I filed the bug report, I received no response whatsoever from Apple… until a couple of weeks ago, when Apple asked me to “verify” the issue with macOS 26.4 beta 4 and update my bug report.
I install the WWDC betas every year in June but don’t run OS betas after September when the major OS updates are released. I don’t have enough time or indeed enough Apple devices to be an unpaid tester year round. Thus, verifying issues in betas is a hassle for me. I’ve been burned by such requests in the past, asked by Apple to verify issues in betas that were not fixed, so I asked Apple directly whether beta 4 fixed the bug: they should already know, since they have my steps to reproduce! However, their response was evasive, never directly answering my question. Moreover, they threatened to close my bug report and assume the bug is fixed if I didn’t verify within two weeks! Again, this is after Apple silently sat on my bug report for three years.
Although I didn’t install the beta myself, I spoke to the developers of Little Snitch, who do run the macOS betas, and they kindly informed me that in their testing, they could still reproduce my issue with macOS 26.4 beta 4. It was no surprise, then, that when I updated to macOS 26.4, released to the public yesterday by Apple, I could still reproduce the bug with my instructions and example project. It appears that Apple knowingly sent me on a wild goose chase, demanding that I “verify” a bug they did nothing to fix, perhaps praying that the bug had magically disappeared on its own, with no effort from Apple.
By the way, a few weeks ago I published a blog post about another bug, FB22057274 “Pinned tabs: slow-loading target="_blank" links appear in the wrong tab,” which is also 100% reproducible but nonetheless was marked by Apple with the resolution “Investigation complete - Unable to diagnose with current information.” On March 9, I updated the bug report, asking what additional information Apple needs from me—they never asked for more information—but I’ve yet to receive a response.
I can only assume that some bozos in Apple leadership incentivize underlings to close bug reports, no matter whether the bugs are fixed. Out of sight, out of mind. Apple's internal metrics probably tell them that they have no software quality problem, because the number of open bug reports is kept artificially low.
Ironically, the iPadOS 26.4 betas introduced a Safari crashing bug that I reported a month ago, but Apple failed to fix the bug before the public release. What’s the purpose of betas? As far as I can tell, the purpose is just to annoy people who file bugs, without doing anything useful.
Shortly after this blog post hit the front page of Hacker News yesterday, my “Investigation complete - Unable to diagnose with current information” Feedback FB22057274 was updated by Apple. What an amazing coincidence! Unfortunately, the update was not helpful, because Apple requested a sysdiagnose. For a user interface issue! This was precisely the fear I expressed in my earlier blog post:
I honestly don’t know what additional information Apple needs to diagnose it. I included not only steps to reproduce but also multiple screen recordings to illustrate. I have a suspicion that Apple did not even read my bug report, because I did not attach a sysdiagnose report. But a privacy-violating sysdiagnose would not be useful in this case!
[…]
The only trick in my bug report is that I used Little Snitch to simulate a slow loading link. This was just the easiest way I could think of to reliably reproduce the bug. There are of course other ways to simulate a slow loading link; if Apple Safari engineers of all people somehow can’t figure that out, then they aren’t qualified for their jobs. Again, however, the more likely explanation is that my feedback was ignored because it did not include a pro forma sysdiagnose, but who knows, because Apple did not request more information of any kind from me.
Here is my response this morning to Apple’s request:
You shouldn't need a sysdiagnose, and I don't know how a sysdiagnose would possibly be helpful for a user interface bug.
I found an easy way to reproduce the issue without Little Snitch: use the Network Link Conditioner preference pane from the Xcode Additional Tools download, and create a profile with Uplink Delay 3000 ms.
The Xcode Additional Tools, which include a number of useful utilities, can be found in the Apple Developer Downloads (sign-in required).
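For readers who want to reproduce a slow-loading link without either Little Snitch or Network Link Conditioner, a tiny local HTTP server that deliberately sleeps before responding works too. This is an illustrative sketch, not part of the bug report; the port and delay below are arbitrary choices:

```python
# Minimal local server that serves a page only after a fixed delay,
# approximating a slow-loading link for testing. Illustrative sketch;
# the 3-second delay mirrors the 3000 ms uplink delay mentioned above.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

DELAY_SECONDS = 3  # how long each request is stalled before responding

class SlowHandler(BaseHTTPRequestHandler):
    """Serves a tiny HTML page after DELAY_SECONDS."""

    def do_GET(self):
        time.sleep(DELAY_SECONDS)
        body = b"<html><body>Slow page loaded.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # suppress per-request console logging

def serve(port=8000):
    """Start the slow server on localhost; blocks until interrupted."""
    HTTPServer(("127.0.0.1", port), SlowHandler).serve_forever()
```

Run `serve()` in a terminal, then link to `http://127.0.0.1:8000/` with `target="_blank"` from a page open in a pinned Safari tab.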