I work with old and new code bases used by many clients in complicated setups, but adding a warning to stderr, while leaving stdout untouched and maintaining proper exit codes, has hardly, if ever, been a problem so far.
Of course, there's always some unpleasant exception, but it's rare.
And of course, I also understand that the author might have found themselves not only in one of those rare-ish instances, but also one where reasoning with the other side was fruitless.
Sounds like you don't use ffmpeg very often. Because ffmpeg can send its output to stdout to be piped to other apps, verbose text output can't use stdout as one would expect; non-error text is sent to stderr instead. So when you want to capture the text output, you have to route stderr to a file. It takes some getting used to, but it's now normal for me.
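A minimal sketch of that routing, using a made-up stand-in function instead of a real ffmpeg invocation (the function, file names, and log text here are all hypothetical):

```shell
#!/bin/sh
# Stand-in for ffmpeg: payload on stdout, progress chatter on stderr.
encode() {
    printf 'payload-bytes\n'                  # the data you pipe downstream
    printf 'frame=100 fps=30 speed=2x\n' >&2  # ffmpeg-style log line
}

# Pipe the payload onward while capturing the log in a file:
encode 2>encode.log | wc -c
```

The `2>encode.log` applies only to stderr, so the pipe still carries nothing but the payload.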
So, yeah, I still know stderr vs stdout, but it isn't the simple thing you want it to be. In the real world, things are not as clean as they are in school books.
Ah yes, Schroedinger's workflow. So important that any disruption is a disaster, and simultaneously so unimportant that they couldn't possibly spend a single dime on the tools critical to the workflow.
If it's commercial software, you're paid to make it work, no matter how stupid that may be, and forced stupidity isn't your problem.
If it's FOSS, you can tell the user to deal with it and close the ticket.
- sometimes you can get the status code, sometimes you can't
- sometimes you can separate out stdout from stderr, sometimes you can't
- sometimes the program generating the error message identifies itself, sometimes it doesn't
- sometimes you don't know if you have a "good error" (ok to ignore) or a "bad error" (cannot ignore)
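For what it's worth, when you can get at all three, the plumbing looks something like this (the program and file names are placeholders):

```shell
#!/bin/sh
# Hypothetical program standing in for whatever you're wrapping.
prog() {
    echo 'real output'
    echo 'warning: something odd' >&2
    return 3
}

prog >out.txt 2>err.txt   # separate the two streams
status=$?                 # grab the exit code before anything clobbers it

echo "status=$status"
```

The trouble the list above describes is exactly that one or more of these three channels is often unavailable or unreliable.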
I am a fan of the HARD FAIL.
I think internal unit tests or things like that should hard fail, then get a human to either fix it, or put in a hard exception.
if it is user-facing... sigh
Maybe the name “stderr” is a bit misleading. It’s totally common for non-error output to go to stderr as well, like verbose/debug logging.
We, uh, reasoned with the other side - we told them to fix their stupid broken setup.
1. If you mess up the command line to the program in a script or pipe, and get a bunch of usage output in stdout, a downstream consumer of that stdout might think it's legit program output and try to parse it.
2. If your user actually calls the program with -h or --help, they might want to |less through it to read it on a small terminal screen. Output that to stdout.
3. Generally, you can always tell if something is going wrong by grepping a single stream (stderr) for errors or warnings, or by looking for a nonzero exit code.
But your general principle applies: Output expected by the user -> stdout. Diagnostic output or output incidental to the program's operation or errors -> stderr.
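Point 3 can be sketched concretely — a toy program (names made up) whose diagnostics go to stderr, scanned for trouble without disturbing the real output:

```shell
#!/bin/sh
# Toy program: results on stdout, diagnostics on stderr.
run() {
    echo 'result: 42'
    echo 'warning: deprecated option' >&2
}

# Scan only the diagnostic stream for trouble; stdout stays clean:
run 2>diag.txt >result.txt
if grep -qiE 'error|warning' diag.txt; then
    echo 'diagnostics reported trouble'
fi
```

Because the streams are separated, the grep can never mistake real program output for a diagnostic, or vice versa.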
1: https://pubs.opengroup.org/onlinepubs/9799919799/functions/s...
It reads as if the change was made to some library code that was depended upon by someone else's program that would print "yay, done", which was in turn depended on by some workflow.
It's probably a non-starter to change library code so it hard-fails if it detects it's being used incorrectly, in situations where it previously ran and did something. That's a severe breaking change in behaviour.
Easing it in by printing a warning message sounds like a reasonable step toward hard failing. But then we get the situation yosefk relates.
I'll use ffmpeg as an example of an edge case: it's hard to get ffmpeg to give a nonzero exit code. What is a problem for the user isn't necessarily a problem for the app, so the app thinks it has completed and exits with zero. For example, if an input file is corrupted so that ffmpeg can no longer read from the source, it will happily close your output file cleanly so it is usable (just shorter than expected) and report that it completed successfully. If all you do is check the exit code, you'll think your file is complete. Much more due diligence is necessary to be sure.
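The general shape of that due diligence, with a made-up stand-in "encoder" that truncates its output yet still exits 0 (all names here are hypothetical; a real check on ffmpeg output would probe the decoded duration rather than the byte count):

```shell
#!/bin/sh
# Stand-in encoder: writes less than asked, yet exits 0, like the
# truncated-input ffmpeg case described above.
encode() {
    head -c 100 /dev/zero > "$1"   # caller expected 1000 bytes
    return 0
}

encode out.bin
code=$?
want=1000
got=$(wc -c < out.bin | tr -d ' ')  # tr: some wc builds pad with spaces

if [ "$code" -eq 0 ] && [ "$got" -lt "$want" ]; then
    echo 'exit code says OK, size check says truncated'
fi
```

The point is simply that the exit code and an independent check of the artifact can disagree, and only checking both catches this case.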
If one wants to use a pager (like I sometimes do, though most of the time I just scroll up), they'll just use `foo 2>&1 | less`.
Gosh I thought the engineering culture was bad where I work.
Some applications have more trouble with setup and teardown than others. Like I knew a professor who kept sending me C programs that would crash before main() and some systems have a lot of trouble with "crash on shutdown" which might be a problem (corrupted files) or a non-problem.
If the user asks a program to print its help message, the help text is the expected command output!
1: https://www.gnu.org/prep/standards/html_node/_002d_002dhelp....
In the case of `git diff | grep FOO`, the diff output should go to stdout.
In the case of `git --help | grep FOO` the help output should go to stdout.
In the case of `git --omg-wtf | grep FOO`, it's fine if there is only output on stderr.
This really does not need to be an either/or. They have different uses. You can stick in 20 printfs and get a quick feel for where the bug is far quicker than stepping through the code - especially if you set a breakpoint and hit run, only to realise that you've overshot. You can run the program 10 times with different parameters and compare the results with printf much more easily than you could with a debugger. But, once you've found the rough area, a debugger is much better for fine grained inspection, and especially interrogating state with carefully written watches.
I do get your point about the risk of leaving in some trace by accident. But it feels like overkill to throw away such a valuable tool just because of that.
There's no good reason you shouldn't be able to have an IDE maintain a text overlay of debugging points which is solely supplied as breakpoint scripts to the debugger instead.
IDEs seem to conk out at click to set breakpoint.
Not having the debugger fully integrated into your integrated development environment is strictly a problem of the commercial Unix and open source crowd and their "Real Programmers are fine with stone knives and bearskins" machismo.
An industry veteran in my circles has recently made the rookie mistake [1] of printing a warning from his code upon misuse. Surprisingly to nobody experienced, critical workflows soon came to a screeching halt.
It turned out that a program using his code prints something like “yay, done” upon exit, and scripts expect it to be the last thing it says. But now those warnings occasionally got printed from destructors or such, after the “yay, done”, making the scripts think the program failed.
One might think that this prompted people to fix the reported misuse, and that thought would be another rookie mistake. Instead, they were quick to point out that it’s hard to know where these warnings could come from, and we cannot risk all those critical workflows failing when some case of misuse surfaces in a new context.
I mean, you could grep to get an upper bound, and if you did, not that many places would come up. But one could then say, as some in fact did, that maybe you haven’t grepped everywhere you should have, and even the cases you did find are owned by many different teams, so we won’t get the fixes quickly enough, etc.
Several solutions were suggested by helpful high-ranking people:
When I was done scrolling his work chat with these helpful suggestions, our unfortunate industry veteran put on a melancholy smile and summarized the situation: “All means are fair except solving the problem.” [2]