>> is almost mythological. In 1953, C.R. Smith, president of American Airlines, was seated next to R. Blair Smith, an IBM salesman, on a cross-country flight. By the time they landed, the outline of a solution had been sketched. IBM and American Airlines entered a formal development partnership in 1959.
edit: oh, and then the actual system didn't go live until another 5 years later, in 1964. Over a decade after the two of them sat next to each other.
Reminder to myself when my potential customers don't sign the deal 5 minutes after my pitch!
The classic "the decision makers can take longer to buy than you can stay solvent" problem of enterprise sales.
Eat that, Bitcoin.
What kind of review would it fail? Sounds like it's pretty well designed to me.
An exec made a public quote that they couldn't have done it if they hadn't used Lisp.
(Today, the programming language landscape is somewhat more powerful. Rust got some metaprogramming features informed by Lisps, for example, and the team might've been able to slog through that.)
How many banks, ERPs, and accounting systems are still running COBOL? (A lot.)
Think about modern web infrastructure and how we deploy...
cpu -> hypervisor -> vm -> container -> run time -> library code -> your code
Do we really need to stack all these turtles (abstractions) just to get instructions to a CPU?
Every one of those layers has offshoots to other abstractions, tools, and functionality that only add to the complexity and convolution. Languages like Rust and Go compiling down to an executable are a step; revisiting how we deploy (the container layer) is probably on the table next... The use case for "serverless" is there (and edge compute), but the costs are still backwards because the software hasn't caught up yet.
Closed the tab.
In a world where implementation is free, will we see a return to built-for-purpose systems like this, where we define the desired inputs and outputs and AI builds the system from the ground up, completely for purpose?
The system was based on a military messaging system.
What is important to note is that before SABRE, the method used was sell-at-will until a stop message was issued; after that, sales would be on request. This method is still used between different airline systems today.
Before the implementation of SABRE, airlines used teleprinters as a way of communicating. Some of the commands in SABRE and other IBM 360 systems come directly from this period. For example, AJFkSFO9MAR was a way of economizing on characters sent: it asks what seats are available from JFK to San Francisco on the 9th of March. This predates SABRE.
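Under the layout that comment describes (action code, then origin, destination, and day+month), a toy parser might look like this. The regex and function names are invented for illustration; this is not the real SABRE command grammar:

```python
import re

# Hypothetical sketch of a teletype-era availability command parser,
# assuming the layout: action code "A", 3-letter origin, 3-letter
# destination, 1-2 digit day, 3-letter month. Not the real grammar.
CMD = re.compile(
    r"^A(?P<orig>[A-Z]{3})(?P<dest>[A-Z]{3})(?P<day>\d{1,2})(?P<month>[A-Z]{3})$"
)

def parse_availability(cmd: str) -> dict:
    m = CMD.match(cmd.upper())   # teletype input was case-insensitive here
    if not m:
        raise ValueError(f"unrecognised command: {cmd!r}")
    return {
        "origin": m.group("orig"),
        "destination": m.group("dest"),
        "day": int(m.group("day")),
        "month": m.group("month"),
    }

print(parse_availability("AJFkSFO9MAR"))
# origin JFK, destination SFO, day 9, month MAR
```

The point of the terse format is visible in the sketch: every field is positional, so no separators or field names need to travel over the wire.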
There are several reasons the System/360 reservation systems used by airlines (like SABRE) are the way they are. One is that they are written in Assembler, and the logic is very tied to the role of reservations. For example, the system was designed in the days of punch cards, which have a totally different method of matching than a relational database. That logic is still used in matching a seat to a fare.
On pure speed, much of the gain comes from clever engineering tricks. An example would be the passenger record: its id is a 9-character alphanumeric identifier of the passenger reservation, and it is a hash of the virtual memory location of the reservation. It takes 4 CPU cycles to retrieve.
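The trick described above can be sketched abstractly: if the locator is derived from the record's storage address, decoding it back into an address is pure arithmetic, with no index lookup at all. The base-36 scheme below is invented for illustration and is not the actual SABRE/TPF encoding:

```python
# Toy illustration of deriving a record locator from the record's
# storage location, so locator -> address is a handful of multiply-adds.
# This base-36 encoding is an invented example, not the real TPF scheme.
ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def address_to_locator(addr: int) -> str:
    chars = []
    for _ in range(9):                    # fixed 9-character alphanumeric id
        addr, rem = divmod(addr, 36)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def locator_to_address(locator: str) -> int:
    addr = 0
    for ch in locator:                    # O(1): nine multiply-adds, no I/O
        addr = addr * 36 + ALPHABET.index(ch)
    return addr

loc = address_to_locator(0x7F3A9C10)
assert locator_to_address(loc) == 0x7F3A9C10
```

Contrast this with a relational lookup: there is no B-tree to walk and no buffer pool to consult, because the identifier *is* the location.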
It’s nothing for even an ancient CPU - let alone our modern marvels that make a Cray 1 cry.
The key is an extremely well-thought and tested design.
(For the pedantic, it's not exactly centralized nor federated since each airline treats their view of the world as absolutely correct)
People don't do it because it's not fashionable (the cool kids are all on AWS with hundreds of containers hosting thousands of microservices, because that's web scale).
"That is not coincidence — it is the market discovering the optimal solution to a specific problem. When you see that pattern in your own domain, pay attention to it."Site is likely SEO slop for future product placement.
Either way, I'm glad I read it and am waiting for the other parts of the series. I'm really curious how to get access to this airline booking data so I can write my own bot to book my flights and deal with all the permutations and combinations to find the best deal.
It probably doesn’t require consensus among all participants (pairwise consensus at every step should be fine), so there is very likely no voting.
It’s not even permissionless. It’s not like a random company could join this “chain” simply because they can generate a keypair.
It’s a fundamentally different problem, and it makes sense that the architecture is different.
It was fast on a 4.77 MHz IBM PC, and much faster on a 10 MHz V20.
50,000 transactions was pretty standard for an IBM mainframe. Now? The z Series is still about the same, but it scales up to 32 processors. (Excuse me: billions per day.)
But yes, you don’t always need cool technologies.
> Convergent evolution is real. Every major GDS independently arrived at the same underlying platform. That is not coincidence — it is the market discovering the optimal solution to a specific problem.
I struggle to understand the claim that GDSes “arrived independently” at interoperability standards through “convergent evolution” and market discovery. Isn’t it something closer to a Schelling point, or a network effect, or using the word “platform” to mean “protocol” or “standard”?
Isn’t it like saying “HTML arose from web browsers’ independent, convergent evolution”? Like—I guess, in that if you diverge from the common standard then you lose the cooperative benefits—see IE6. And I guess, in that in the beginning there was Mosaic, and Mosaic spoke HTML, that all who came after might speak HTML too. But that’s not convergent evolution, that’s helping yourself to the cooperative benefits of a standard.
“The market” was highly regulated when the first GDSes were born in the US. Fares, carriers, and schedules were fixed between given points; interlining was a business advantage; the relationships between airlines and with travel agents were well-defined; and so on [0]. IATA extended standards across the world; you didn’t have to do it the IATA way, but you’d be leaving business on the table.
If anything, it seems like direct-booking PSSes (he mentioned Navitaire [1]) demonstrate the opposite of the LLM’s claim. As the market opened up and the business space changed, new and divergent models found purchase, and new software paradigms emerged to describe them. It took a decade or two (and upheaval in the global industry) before the direct-booking LCC world saw value in integrating with legacy GDSes, right?
…the LLM also seems bizarrely impressed that identifiers identify things:
> One PNR, two airlines, the same underlying platform.
> Two tickets, two currencies of denomination, one underlying NUC arithmetic tying them together.
> One 9-character string, sitting in a PNR field, threading across four organisations' financial systems.
[0] https://airandspace.si.edu/stories/editorial/airline-deregul...
[1] https://www.phocuswire.com/Jilted-by-JetBlue-for-Sabre-Navit...
Also, try to retrieve a PNR, or do basically anything, on an airline's own website -- the UX is usually pretty bad and the data loading takes forever. For that too, the GDS is to blame.
Run time - This makes development faster. Python, Lua, and Node.js projects can typically test out small changes locally faster than Rust and C++ can recompile. (I say this as a pro Rust user - The link step is so damned slow.)
Container - This gives you a virtual instance of "apt-get". System package managers can't change, so we abstract over them and reuse working code to fit a new need. I am this very second building something in Docker that would trash my host system if I tried to install the dependencies. It's software that worked great on Ubuntu 22.04, but now I'm on Debian from 2026. Here I am reusing code that works, right?
VM - Containers aren't a security sandbox. VMs allow multiple tenants to share hardware with relative safety. I didn't panic when the Spectre hacks came out - The cloud hosts handled it at their level. Without VMs, everyone would have to run their own dedicated hardware? Would I be buying a dedicated CPU core for my proof-of-concept app? VMs are the software equivalent of the electrical grid - Instead of everyone over-provisioning with the biggest generator they might ever need, everyone shares every power station. When a transmission line drops, the lights flicker and stay on. It's awe-inspiring once you realize how much work goes into, and how much convenience comes out of, that half-second blip when you _almost_ lose power but don't.
Hypervisor - A hypervisor just manages the VMs, right?
Come on. Don't walk gaily up to fences. Most of it's here for a reason.
> But yes, you don’t always need cool technologies.
That's kinda the irony: mainframes are incredibly cool pieces of tech, just not fashionable. They have insane consistency guarantees at the instruction level, hot-swapping, etc. Features you'd struggle to replicate with the dumpster fire that is modern microservice-based cloud computing.
Your argument for host OS, virtual OS, container is the very point I'm making. Rather than solve for security and installability, we built more tooling, more layers of abstraction. Each has overhead, a security surface, and complexity.
Rather than solve Rust's performance (at build time), we switch to a language that is faster to iterate on but has more overhead, more security surface, more complexity.
You have broken down the stack of turtles we built to avoid solving the problem at the base level...
SABRE, which the article is discussing, is the polar opposite of this: it hints that more layers of abstraction aren't always the path to solutions.
Part 1 of 6 in the Iron Core series — the 60-year-old infrastructure that flies 4.5 billion people a year.
In December 2025, someone at Technogise opened MakeMyTrip's corporate platform, typed in a destination, and booked me two flights to London. The whole thing took under a minute. A confirmation email landed in my inbox. Six-character booking references appeared: DDTCIV and DHB4AL.
I was going to speak at ContainerDays 2026. A conference about containers, orchestration, and cloud-native infrastructure — the kind of modern, ephemeral, stateless systems I spend my working life thinking about.
The irony only hit me on the flight over.
The infrastructure that booked those flights was designed in 1960. It runs on an operating system that predates Unix. It speaks a command language built for teletypes. It has been running continuously, without a full rewrite, for over six decades — and it handles roughly 10,000 transactions per second at peak.
I build distributed systems. I thought I understood complex infrastructure. Then I looked at my own boarding pass and pulled the thread.
This is a six-part series about what I found.
To understand why this infrastructure exists, you need to understand the problem it was built to solve.
By the mid-1950s, American Airlines was managing reservations on index cards. A booking required a phone call to an agent, who would search physical card racks across multiple city offices, confirm availability verbally, and call the passenger back. A transatlantic reservation could take 90 minutes to confirm. The airline was processing roughly 85,000 reservation requests a day across 50-plus cities. The system was collapsing.
The origin of what would become the GDS — Global Distribution System — is almost mythological. In 1953, C.R. Smith, president of American Airlines, was seated next to R. Blair Smith, an IBM salesman, on a cross-country flight. By the time they landed, the outline of a solution had been sketched. IBM and American Airlines entered a formal development partnership in 1959.
The result was SABRE — Semi-Automated Business Research Environment. It went live in 1964.
The same year the IBM System/360 was announced. Three years before the first ATM. Five years before the moon landing. Fifteen years before the first VisiCalc spreadsheet.
Within a decade, every major airline followed suit:
| GDS | Founded | Original Owner | Tech Foundation |
|---|---|---|---|
| SABRE | 1964 | American Airlines + IBM | IBM ACP / TPF |
| Apollo | 1971 | United Airlines | IBM TPF |
| Galileo | 1987 | United + BA + KLM + Swissair | IBM TPF |
| Worldspan | 1990 | Delta + Northwest + TWA | IBM TPF |
| Amadeus | 1987 | Air France + Lufthansa + Iberia + SAS | Bull mainframe → Unix |
Notice the common thread. They all converged on the same underlying runtime. Which brings me to the piece of technology that most software engineers have never heard of, and which is almost certainly processing a flight booking somewhere in the world as you read this.
Transaction Processing Facility (TPF) is an IBM mainframe operating system descended from ACP, American Airlines' original Airline Control Program. It was designed for one purpose: processing enormous volumes of simple transactions with sub-millisecond response times.
It is not Unix. It does not share Unix's lineage, its philosophy, or its abstractions. It predates Unix by a decade.
Understanding TPF requires setting aside almost everything you know about modern operating systems:
| Property | TPF | Modern OS |
|---|---|---|
| Process model | No processes. No threads. Short-lived "programs" that execute and exit. | Processes, threads, coroutines |
| Memory model | Fixed memory "cells" per transaction. No heap. No dynamic allocation. | Virtual memory, heap, GC |
| I/O model | Extremely fast synchronous I/O to DASD (Direct Access Storage) | Async I/O, block storage, NVMe |
| Scheduling | Preemptive, priority-based, microsecond granularity | Typically millisecond granularity |
| Failure model | Transaction-level rollback. The system does not crash — the transaction does. | Depends on application |
| Primary language | Assembler. C was added later. | Everything |
The key insight is that TPF is not really an OS in the way you think of one. It's closer to what we would now call a transaction runtime — a system purpose-built to receive a unit of work, execute a short program against it, commit state changes, and immediately move on. No daemons. No background threads. No connection state persisted in memory between transactions.
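The shape described here can be sketched in a few lines. Everything below (names, message format, the staged-copy commit) is invented to illustrate the idea of a transaction runtime, not TPF's actual interface:

```python
# Minimal sketch of a "transaction runtime": each message is a
# self-contained unit of work, a short handler runs against it, state
# changes commit atomically, and nothing survives between transactions.
def run_transaction(store: dict, message: dict, handlers: dict) -> dict:
    handler = handlers[message["type"]]   # dispatch on message type
    staged = dict(store)                  # work against a staged copy
    try:
        reply = handler(staged, message)
    except Exception:
        # The transaction fails; the system does not. Store untouched.
        return {"status": "ROLLED_BACK"}
    store.clear()
    store.update(staged)                  # commit: all-or-nothing
    return {"status": "OK", "reply": reply}

def book_seat(state, msg):
    left = state.get(msg["flight"], 0)
    if left <= 0:
        raise RuntimeError("sold out")
    state[msg["flight"]] = left - 1
    return f"confirmed on {msg['flight']}"

inventory = {"AI111": 1}
handlers = {"BOOK": book_seat}
print(run_transaction(inventory, {"type": "BOOK", "flight": "AI111"}, handlers))
print(run_transaction(inventory, {"type": "BOOK", "flight": "AI111"}, handlers))
```

The second booking rolls back because the flight is sold out, while the store and the runtime itself are unaffected, mirroring the failure model in the table above.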
This design was made for one workload. It is exceptionally good at that workload.
Modern TPF-based systems handle around 10,000 transactions per second under normal conditions. During a fare sale — when millions of customers simultaneously discover that flights are cheap — that number can reach 50,000 TPS. End-to-end message round-trip: roughly 100 milliseconds.
In the 1990s, when every other industry was migrating off mainframes to Unix, airlines looked at the performance numbers and stayed put. The replacements couldn't match the throughput. Many still can't. The IBM Z-series mainframes running z/TPF today are not running it out of nostalgia. They are running it because nothing else has beaten it for this specific job in 60 years.
There is a lesson in that. I will come back to it.
When Technogise booked my ContainerDays travel through myBiz, the booking touched a specific layer of this ecosystem. MakeMyTrip uses Amadeus as its GDS — the system born from a 1987 partnership between Air France, Lufthansa, Iberia, and SAS, and now the dominant GDS across Europe, India, and much of Asia-Pacific.
Amadeus is not running on the original 1987 Bull mainframe. It migrated to Unix in the 1990s, and has since moved progressively toward a more modern architecture. But the data model, the protocol, and crucially the command language that agents use — cryptic mode — remain continuous with the original 1960s design. The format of my PNR, the structure of my e-ticket, the way the fare is calculated: all of it follows conventions established before I was born.
My return flights had a further complication. The outbound was entirely Air India — DDTCIV, NAG→DEL→LHR. Air India runs on Amadeus Altéa, a modern PSS (Passenger Service System) built on top of the Amadeus infrastructure. They migrated to it in 2023, replacing a legacy SITA system in one of the largest airline PSS migrations in Asian aviation history.
The return — DHB4AL, MAN→LHR→DEL→NAG — mixed British Airways (who also runs on Amadeus Altéa) and Air India. One PNR, two airlines, the same underlying platform. That consistency is what made the booking work. It is also what made the re-accommodation work when things went wrong — and things did go wrong.
But I am getting ahead of myself.
There is another major Indian airline worth understanding before we go further: IndiGo.
IndiGo — the largest airline in India by market share — does not use Amadeus. It uses Navitaire, a PSS built specifically for low-cost carriers, now owned by Amadeus but operated as a separate product. Navitaire's NewSkies platform is purpose-built for high-volume, low-margin, point-to-point flying — no interline, no complex fare construction, no legacy baggage.
This is a deliberate architectural choice. Navitaire is cheaper to operate, faster to configure, and optimised for the IndiGo model: high frequency, fixed pricing, minimal complexity. The trade-off is reduced interoperability. IndiGo distributes inventory into Amadeus for travel agent bookings — you can see 6E flights in a cryptic availability display — but the ticketing and check-in systems are entirely Navitaire.
The split matters when something goes wrong. An IndiGo delay affecting an Air India connection does not trigger automatic re-accommodation between systems. The human has to intervene.
| Airline | PSS | GDS Distribution |
|---|---|---|
| Air India (AI) | Amadeus Altéa | Amadeus (primary) |
| IndiGo (6E) | Navitaire NewSkies | Amadeus / Sabre (via distribution layer) |
| Vistara (absorbed into AI) | Amadeus Altéa | Amadeus |
| Air India Express | Navitaire | Amadeus / Sabre |
When myBiz confirmed my booking in December 2025, the following sequence fired:
Technogise travel admin (myBiz corporate portal)
↓
MakeMyTrip OTA layer (availability check, pricing)
↓
Amadeus GDS (seat inventory, PNR creation)
↓
Air India Altéa PSS (segment confirmation, HK status)
↓
IATA BSP (Billing Settlement Plan) — payment routing
↓
E-ticket issued under Air India numeric code 098
↓
PNR DDTCIV created, stored in Amadeus
↓
Confirmation email → myBiz → Technogise → me
Each arrow is a system boundary. Each boundary has its own protocol, its own failure mode, and its own eventual consistency characteristics. The 30-second booking conceals a chain of synchronous and asynchronous calls across systems built in different decades by different companies in different countries.
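That chain can be sketched as a sequence of boundary crossings. The stage names mirror the diagram above; the functions are placeholder stubs I invented for illustration, and a real implementation would also compensate (void the PNR, reverse the settlement) when a stage fails mid-chain:

```python
# Sketch of the booking chain as a pipeline of boundary crossings.
# Stage functions are invented stubs; only the shape is the point:
# each boundary can fail independently, so the chain is not atomic.
def check_availability(ctx):  ctx["fare"] = "priced"; return ctx
def create_pnr(ctx):          ctx["pnr"] = "DDTCIV"; return ctx
def confirm_segments(ctx):    ctx["status"] = "HK"; return ctx
def settle_payment(ctx):      ctx["bsp"] = "routed"; return ctx
def issue_eticket(ctx):       ctx["carrier_code"] = "098"; return ctx

SYNCHRONOUS_CHAIN = [
    ("OTA availability/pricing", check_availability),
    ("GDS PNR creation",         create_pnr),
    ("PSS segment confirmation", confirm_segments),
    ("BSP settlement",           settle_payment),
    ("E-ticket issuance",        issue_eticket),
]

def book(ctx):
    completed = []
    for name, stage in SYNCHRONOUS_CHAIN:
        try:
            ctx = stage(ctx)
            completed.append(name)
        except Exception:
            # Here a real system would run compensations for `completed`.
            return {"ok": False, "failed_at": name, "completed": completed}
    return {"ok": True, "context": ctx}

result = book({})
print(result["ok"], result["context"]["pnr"])
```

A failure at, say, settlement leaves a created PNR behind that must be voided: that is exactly the "each boundary has its own failure mode" problem in miniature.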
The PNR at the end of that chain — six characters, DDTCIV — is the thread that holds it all together.
In the next part, I will decode exactly what those six characters are, what they contain, and why the fare calculation line on my e-ticket is one of the most information-dense strings in commercial aviation.
Fitness for purpose beats fashionable architecture. TPF is not modern. It would fail every architectural review a contemporary engineering team would apply to it. It also handles 50,000 transactions per second with sub-100ms latency on hardware that costs a fraction of an equivalent cloud footprint. It has been doing this for 60 years. The lesson is not that old software is good software — it is that the right tool for the right job, well-maintained, is very hard to beat.
Convergent evolution is real. Every major GDS independently arrived at the same underlying platform. That is not coincidence — it is the market discovering the optimal solution to a specific problem. When you see that pattern in your own domain, pay attention to it.
Migrations are expensive. Air India's move to Amadeus Altéa in 2023 was years in the making. A company the size of an airline, with decades of booking history, interline agreements, loyalty programme integrations, and airport systems dependencies, cannot simply "lift and shift." The scar tissue from that migration is still visible in the industry. I will come back to it in Part 4.
Next: Part 2 — Six Characters. What DDTCIV actually is, what it contains, and why it is less unique than you think.
The Iron Core is a six-part series by Ajitem Sahasrabuddhe. Ajitem is a software engineer at Technogise and spoke at ContainerDays 2026 in London.