Go watch it. Great show.
When the show came out I thought it must have been created by one of my classmates because the title is so arcane. Turns out it wasn't, but the show definitely captures the vibe of computing in Austin and Dallas in the 80s.
I suspect they hooked me with "byte" and "nybble"… And it just got better the more immersed I got in the history, Jargon Files…
They introduced Cameron Howe as some sort of world-class hacker who could do anything, so one of her first scenes was her typing something… and type she did, one finger at a time.
I mean, wtf.
A world-class hacker who literally types one finger at a time, like she had never used a keyboard before.
That scene nearly made me quit the show right there and then.
Whenever I see that actress in something else I just can't help but think back to how she couldn't even be bothered to learn how to type.
(The PET had its own monitor that, unlike common composite monitors of the era, apparently would not continue to scan when the sync went away.)
Vladimir Horowitz very famously played a televised concert back in the 80s where, for the first time, a few cameras stayed focused closely on his hands. He had horrible technique. It was horrible by his own professed standard: for most of the fundamental things he himself taught to his students, he was doing the opposite! This was broadcast to millions of people. Piano teachers everywhere were pissed.
While that bad technique isn't particularly noticeable in the resulting sound for that concert, there's an analysis somewhere that shows the damage it did as he aged. You can hear certain problems he was having in his later recordings, and video from the same period confirms that the bad technique (like straining the wrist on octaves) was the culprit[1].
In any case, all kinds of world class people do all kinds of fucked up shit.
Edit:
1: In other words, when he was middle-aged he could play octaves accurately with a strained wrist, but he couldn't do that in old age. However, if he had been leveraging the weight/power of his entire arm for the octaves, he would have gotten accuracy in both cases.
2: IIRC, he didn't realize what his technique looked like until someone showed him the video. :)
You are looking for the wrong badge.
What broke the show for me was some hot peroxide blonde doing what was really done by a slightly dumpy guy in an isolated office.
I just can't watch shows that fictionalize history from my field of work. My dad's a musician and he's the same with his field.
I'm fine with that. I read the history book or watch the documentary instead.
I recommend it at every chance I get, but few people ever watch it. They're more likely to give Silicon Valley a try.
I still can’t ‘properly’ touch-type.
Great intro too:
Most actors and directors put a lot of thought into small details like this, so when you see something like this it’s often intentional.
If anyone else loved these actors watching HACF, I would recommend watching The Fall (Pace), Fargo S3 (McNairy) and Station Eleven (Davis).
E.g.,
> Texas Instruments was founded by Cecil H. Green, J. Erik Jonsson, Eugene McDermott, and Patrick E. Haggerty in 1951. McDermott was one of the original founders of Geophysical Service Inc. (GSI) in 1930. McDermott, Green, and Jonsson were GSI employees who purchased the company in 1941. In November 1945, Patrick Haggerty was hired as general manager of the Laboratory and Manufacturing (L&M) division, which focused on electronic equipment.[14] By 1951, the L&M division, with its defense contracts, was growing faster than GSI's geophysical division. The company was reorganized and initially renamed General Instruments Inc. Because a firm named General Instrument already existed, the company was renamed Texas Instruments that same year.
* https://en.wikipedia.org/wiki/Texas_Instruments
And how it came to have military contracts:
> TI entered the defense electronics market in 1942 with submarine detection equipment,[41] based on the seismic exploration technology previously developed for the oil industry. The division responsible for these products was known at different times as the Laboratory & Manufacturing Division, the Apparatus Division, the Equipment Group, and the Defense Systems & Electronics Group (DSEG).
* Ibid
Unfortunately, in my many moves it has disappeared, though I still have the schematics for it.
Somehow I missed the boat on being a billionaire!
The show is much more, and much better, than that though. I’m glad I kept watching.
They really captured the urge to build things in tech, and the problems that come with it. HACF, Silicon Valley, and The Soul of a New Machine are a trifecta.
I have never watched the AMC show Halt and Catch Fire, and for a long time I only knew the title, but nothing about the show. Something about it always reminded me of programmer humor: somewhat dramatic, a little absurd, and weirdly precise. Turns out, the show really is about the computer industry in the 1980s and 1990s, but the phrase itself is much older than the show, and it started as some engineering humor.
In the context of computers, Halt and Catch Fire (shortened to HCF) has been generalized to describe machine code that causes the CPU to stop doing anything useful, so that the only way to recover is to reset (or power-cycle) the machine. In a pretty literal sense, it "halts" the machine. Sure, the "catch fire" part is a joke, but it's not as far-fetched as it might seem. Take the IBM System/360, for example. Apparently, when this system encountered a certain invalid opcode, it would repeatedly access a specific location in the magnetic core memory, which caused that location to get very hot and even catch on fire.
Over time, HCF also became a catch-all label for undocumented or invalid opcodes that lock up the processor, intentional test modes that look like a hang, and real hardware bugs (you might even recall that some early Pentium-class chips could be locked up with a carefully chosen illegal instruction, known as the F00F bug - more on that later).
The phrase was coined, in part, as a play on the standard three-letter assembly mnemonics: ADD, CMP, JMP, etc. The joke spread through various publications, which printed HCF alongside some other personal favorites of mine:
- EPI: Execute Programmer Immediately
- DC: Divide and Conquer
- CRN: Convert to Roman Numerals

So HCF was mostly just a joke, until it wasn't.
The Motorola 6800 has 256 possible single-byte opcodes, but not every bit pattern corresponds to a documented instruction. Hit the wrong one and the chip does whatever the silicon happens to decode it into: sometimes nothing much, sometimes something significant.
Gerry Wheeler's BYTE piece "Undocumented M6800 Instructions" ran in December 1977, volume 2 number 12, in the Technical Forum on pages 46–47. He starts from Motorola's own docs: 197 documented opcodes, which leaves 59 bit patterns unaccounted for in the official story. Some of those behave like NOPs, some change the condition-code register in patterns Wheeler said were still "undeciphered" at the time, and two bytes — $9D and $DD — share one especially nasty outcome he chose to call Halt and Catch Fire. He is explicit that "The mnemonics are, of course, assigned by me."
What actually happens, in hardware terms, is that the part stops behaving like a normal fetch-decode-execute engine: the program counter keeps advancing and the chip issues reads while the address lines march through memory like a hardware counter. Interrupts will not stop you on this path of self-destruction; the only way out of the loop is a reset or power cycle. Wheeler's own description is worth reading in the original, rather than paraphrasing:
> When this instruction is run the only way to see what it is doing is with an oscilloscope. From the user's point of view the machine halts and defies most attempts to get it restarted. Those persons with indicator lamps on the address bus will see that the processor begins to read all of the memory, sequentially, very quickly. In effect, the address bus turns into a 16 bit counter. However, the processor takes no notice of what it is reading... it just reads.
On the "catch fire" half of the phrase, he adds, "Well, almost." While the IBM system did catch fire in cases, it seems that the Motorola 6800 did not.
Outside BYTE, the same opcode family picked up other nicknames. David J. Agans, in Debugging (2002), remembers DD as what his team called the "Drop Dead" instruction (same bus-walking trick, different name) and notes that engineers used it deliberately because "all of the address and clock lines were nice, cycling square waves" on a scope.
Most machines only feel like they hang. On at least one early 6800 microcomputer with finicky memory-mapped video, the pattern can show up as visible "snow". Ben Z's Sphere News article is a great rabbit hole to go down (video RAM arbitration, timing, CRT artifacts).
Years later, Motorola engineers wrote more about this same issue. In IEEE Design & Test (1985), Daniels and Bruce describe an illegal opcode that customers found on the MC6800, internally nicknamed HACOF, where the program counter could increment forever until reset. They also mention a detail that's almost hard to believe: product engineering wanted a fast way to scan RAM during bring-up, recognized that this behavior already did something like that, and basically kept it instead of paying to remove it. So, in the words of Bob Ross, this turned out to be a "happy accident".
More recently, someone put a real MC6800 on the bench and actually measured the thing instead of just trusting scans of old magazines and Wikipedia pages. Doc TB's lab write-up notes an interesting detail: after the opcode is fetched, there is a delay on the order of tens of milliseconds before the address lines settle into the famous fast counting pattern, and there are other undocumented opcodes that look like slower or glitchier versions of the same idea.
The 6800 wasn't the only processor with this kind of issue: there are illegal 6502 opcodes that lock the CPU, the Pentium F00F bug (if you remember that era), pairs of instructions on some architectures that wait forever for an interrupt that cannot arrive, and modern x86 fuzzing talks where people still turn up invalid states in huge processors.
Basically, fuzzing means feeding random or unexpected data to the processor to flush out vulnerabilities or bugs. Unsurprisingly, it's a pretty effective strategy, and not just for processors, but for all kinds of software as well.
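As a rough illustration, here is what the core loop of such a fuzzer might look like in Python. The execution target here is a stand-in stub (a real fuzzer drives actual hardware or an emulator and detects hangs with timeouts, watchdogs, or an external reset line); the $9D/$DD lockup bytes are just borrowed from the 6800 story above.

```python
import random

def execute(opcode):
    """Stand-in execution target: True if the opcode completes,
    False if it locks up. A real fuzzer can't just look this up;
    it has to run the instruction and detect the hang externally."""
    LOCKUP_OPCODES = {0x9D, 0xDD}  # the 6800 HCF bytes, as stand-ins
    return opcode not in LOCKUP_OPCODES

def fuzz_opcodes(trials=10_000, seed=1234):
    rng = random.Random(seed)
    hangs = set()
    for _ in range(trials):
        op = rng.randrange(256)    # one random single-byte opcode
        if not execute(op):
            hangs.add(op)
    return sorted(hangs)

print([f"${op:02X}" for op in fuzz_opcodes()])
# With this many trials it reliably rediscovers $9D and $DD.
```

Of course, for a 256-value opcode space you would just enumerate every byte; random fuzzing earns its keep on architectures like x86, where instructions are variable-length and the search space is far too large to scan exhaustively.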
This was a fun bit of history to research - and there turned out to be much more to it than I expected, even regarding the "catch fire" part. As a lot of software moves up the stack, it's easy to lose sight of the hardware from our 10,000-foot view. In the end, it's just a bunch of silicon wired together in a way that can sometimes go wrong.
All I know is that this phrase is too good to not use - expect a future project (or company) to use the "HCF" acronym.
To keep you busy, here are some links to the sources and more info: