That would be a very annoying way to write e-mail addresses, and no less prone to typosquatting (if anything, more so)
Both standards lacked the hindsight we have today, but X.400 would just have added complexity (as years of tacked-on extensions built upon it) that makes error-free parsing harder
Not even then, when people with access to computers probably numbered in the thousands, would anyone have liked to type "C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald", as in the article's example.
Just seeing that X.400 notation is giving me bad memories!
Yes - and this is actually really important! It's true of most of the important early internet technologies. It's the entire reason "internet" standards won over "telco" (in this case ITU) standards - the latter could only be deployed by big coordinated efforts, while internet standards let individual decentralized admins hook their sites together.
Did any of the ITU standards win? In the end, the internet swallowed telephones and everything is now VoIP. I think the last of the X standards left is X.509?
https://jacobfilipp.com/MSJ/1993-vol8/qawindows.pdf
By 1995, the “Internet” e-mail address was the only remaining one.
Immutability is one of the best things about email.
Reminds me of IPv6. ;)
That would have required a lot of changes to computing history beyond simply email, and I doubt many of them would have been improvements.
I doubt it. USPS charges everyone to send snail mail, and I get plenty of spam in my mailbox. I end up with way more spam in my snail mailbox than in my email inbox, since the latter has filtering.
Afraid the spammers will always be with us.
Thanks to email security scanners this feature is largely broken.
And so are single-click unsubscribe links. So much so that we have to put our unsubscribe page behind a captcha.
rant over
Sounds like a really fast way to kill a network instead of grow it into a 4B daily active user staple like email is today. You'd basically ensure that email would ONLY be spam, because marketers would be the only ones willing to spend money to reach people.
Every time I see someone suggest micropayments on HN I have to wonder if people here have any understanding of how actual humans are. Turning every action on your network into a purchase decision is a good way to ensure nobody ever does anything on your network and thus it never becomes a network.
Humans will always gravitate toward the lowest friction way to achieve their goals. So immediately some private company would introduce a free communication channel as a loss leader instead, theirs would grow faster, and then they'd monetize via ads once their network reached critical mass (see also: WhatsApp). Killing the more egalitarian decentralized protocol in the process.
One person's feature is another's anti-feature. I'm glad it's dead.
For example from 2023: X.1095: Entity authentication service for pet animals using telebiometrics
"A complex system that works is invariably found to have evolved from a simple system that worked."
https://lawsofsoftwareengineering.com/laws/galls-law/
In my naive youth I always thought top-down design was the sensible way to build systems. But after witnessing so many of them fail miserably, I now agree with Gall.
Yes, it is a pain to manage. Yes, it is all still mostly running on 20+-year-old hardware and software.
It is slightly ironic that the main way we communicate X.400 addresses between parties is through modern email.
SMTP handled routing by piggybacking on DNS. When an email arrives, the SMTP server looks at the domain part of the address, does an MX query, and then attempts to transfer the message to the hosts returned by that query.
Very simple. And, it turns out, immensely scalable.
You don't need to maintain any routing information unless you're overriding DNS for some reason - perhaps an internal secure mail transfer method between companies that are close partners, or are in a merger process.
By contrast X.400 requires your mail infrastructure to have defined routes for other organisations. No route? No transfer.
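As a rough sketch (using the third-party dnspython package, and the address from the article as the example), the lookup looks something like this:

```python
# Minimal sketch of the MX lookup an SMTP server does before relaying.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def mail_hosts(address: str) -> list[str]:
    """Return the domain's mail exchangers, best preference first."""
    domain = address.rsplit("@", 1)[1]
    answers = dns.resolver.resolve(domain, "MX")
    # Lower preference values are tried first.
    return [str(r.exchange).rstrip(".")
            for r in sorted(answers, key=lambda r: r.preference)]

print(mail_hosts("harald.alvestrand@uninett.no"))
```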
I remember setting up X.400 connectors for both Lotus Notes/Domino and for Microsoft Exchange in the mid to late 90s, but I didn't do it very often - because SMTP took over incredibly quickly.
An X.400 infrastructure would gain new routes slowly and methodically. That was a barrier to expanding the use of email.
Often X.400 was just a temporary patch during a mail migration - you'd create an artificial split in the X.400 infrastructure between the two mail systems, with the old product on one side and the new target platform on the other. That would allow you to route mails within the same organisation whilst you were in the migration period. You got rid of that the very moment your last mailbox was moved, as it was often a fragile thing...
The only thing worse than X.400 for email was the "workgroup" level of mail servers like MS Mail/cc:Mail. If I recall correctly they could sometimes be set up so your email address was effectively a list of hops on the route. This was because there was no centralised infrastructure to speak of - every mail server was just its own little island. It might have connections to other mail servers, but there was no overarching directory or configuration infrastructure shared by all servers.
If that was the case then your email address would be "johnsmith @ hop1 @ hop2 @ hop3" on one mail server, but for someone on the mail server at hop1 your email address would be "johnsmith @ hop2 @ hop3", and so on. It was an absolute nightmare for big companies, and one of the many reasons that those products were killed off in favour of their bigger siblings.
A two-or-more order-of-magnitude reduction in a problem seems like a good start and a worthwhile step, not something to disregard because it's not 100%…
The other difference from that era, and even the early internet era, is how much is no longer standardised at all, but decided by global monopolies. Back then it was a given that everything would at least need to interoperate at the national level. But we may be returning to that.
Maybe X.500 - also known as LDAP, and widely deployed across enterprises in the form of Active Directory.
Funnily enough, if collusion is prohibited, the goal of such a law would be more competition, but the result is more mergers and monopolies, up until the point where antitrust kicks in and ad-hoc limits the monopoly, so each industry ends up with 1 bidder, or 2-3 tops
Besides LDAP and X.509, you've got old standards that were very successful for a while. I'm perhaps a little bit too young for this, but I vaguely remember X.25 practically dominated large-scale networking, and for a while inter-network TCP/IP was often run over X.25. X.25 eventually disappeared because it was replaced by newer technology, but it didn't lose to any contemporary standard.
And if you're looking for new technology, CTAP (X.1278) is a part of the WebAuthn standard, which does seem to be winning.
I'm pretty sure there are other X-standards common in the telco industry, but even if we just look at the software industry, some ITU-T standards won out. This is not to say they weren't complex or that we didn't have simpler alternatives, but sometimes the complex standards do win out. The "worse is better" story is not always true.
The OP article is definitely wrong about this:
> “Of all the things OSI has produced, one could point to X.400 as being the most successful,
There are many OSI standards that are more successful than X.400, by the sheer virtue of X.400 being an objective failure. But even putting that aside, there are X-family standards that are truly successful and ubiquitous. X.500 and X.509 are strong contenders, but the real winner is ASN.1 (the X.680/690 family, originally X.208/X.209).
ASN.1 is everywhere: it's obviously present in other ITU-T-based standards like LDAP, X.509, CTAP and X.400, but it's also been widely adopted outside of ITU-T in the cryptography world: the PKCS standards (used for RSA, DSA, ECDSA, DH and ECDH key storage and signatures), Kerberos, S/MIME, TLS. It's also used in common non-cryptographic protocols like SNMP and EMV (chip-and-PIN and contactless payment for credit cards). Even if you're using JOSE or COSE or SSH (which are not based on ASN.1), ASN.1-based PKCS standards are often still used for storing the keys. And this is completely ignoring all the telco standards. ASN.1 is everywhere.
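To show just how small the core idea is, here is a hand-rolled toy sketch of DER's tag-length-value encoding (short-form lengths only, nothing over 127 bytes):

```python
# A toy DER (X.690) encoder, just to show the tag-length-value scheme.
def der_integer(n: int) -> bytes:
    # Big-endian, with a leading 0x00 whenever the top bit would be set
    # (DER integers are signed, so positive values must start with a 0 bit).
    body = n.to_bytes(n.bit_length() // 8 + 1, "big")
    return bytes([0x02, len(body)]) + body          # tag 0x02 = INTEGER

def der_sequence(*elements: bytes) -> bytes:
    body = b"".join(elements)
    return bytes([0x30, len(body)]) + body          # tag 0x30 = SEQUENCE

# SEQUENCE { INTEGER 65537, INTEGER 5 }
print(der_sequence(der_integer(65537), der_integer(5)).hex())
# -> 30080203010001020105
```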
My primary goal is not to send e-mail for free -- my primary goal is to have reliable, low-overhead communication with humans. Having this sponsored by spammers is a fine start, but even if I paid a dollar a year or so, that would be much lower overhead than even a day's worth of looking through spam is today (at the rate I value my time -- but even if you value your time orders of magnitude less, the payoff is there).
I see that Wikipedia claims that "X.400 is quite widely implemented[citation needed], especially for EDI services", and that might once have been the case - but I doubt it was particularly widespread even at the time that article was first written. It's worth noting that that [citation needed] tag dates from October 2008!
In the early 90s I implemented a gateway between Novell email and X.400. What amused me the most was that X.400 specified an exclusive enumerated list of reasons why email couldn't be delivered, including "recipient is dead". At the X.400 protocol level this was a binary number. SMTP uses a 3-digit number for the general category, followed by a free-form line of text. Many other Internet standards, including HTTP, use the same pattern.
It was already obvious at the time that the X.400 field was insufficient, yet also impractical for mail administrators to ensure was complete and correct.
That was the underlying problem with X.400 and similar standards: they tried to cover everything in advance as part of the spec, while Internet standards were more pragmatic.
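To illustrate the SMTP pattern mentioned above (a numeric class machines can branch on, plus free-form text for humans): the classes below are real SMTP reply classes, but the wording after each code is made up.

```python
# Machines branch on the numeric class; humans read the free-form text.
replies = [
    "250 OK: message accepted for delivery",   # 2xx = success
    "450 Mailbox busy, try again later",       # 4xx = temporary failure
    "550 No such user here",                   # 5xx = permanent failure
]
for line in replies:
    code, text = line.split(" ", 1)
    action = {"2": "done", "4": "retry later", "5": "give up"}[code[0]]
    print(f"{code}: {action} ({text})")
```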
Who can forget addresses like "utzoo!watmath!clyde!burl!ulysses!allegra!mit-eddie!rms@mit-prep"
Of course, the biggest--and weirdest--success of the ITU standards is that the OSI model is still frequently the way networking stacks are described in educational materials, despite the fact that it bears no relation to how any of the networking stack was developed or is used. If you really dig into how the OSI model is supposed to work, one of the layers described only matters for teletypes--which were a dying, if not dead, technology when the model was developed in the first place.
Have you ever tried to implement an ITU standard from just reading the specs? It's hard. Firstly you have to spend a lot of money just to buy the specs. Then you find the spec is written by somebody who has a proprietary product, and is tiptoeing along a line that reveals enough information to keep the standards body happy (ie, has enough info to make it worthwhile to purchase the specification), and not revealing the secret sauce in their implementation.
I've done it, and it's an absolute nightmare. The IETF RFCs are a breath of fresh air in comparison. Not only can you read the source, there are example implementations!
And if you think that didn't lead to a better outcome, you're kidding yourself. The ITU process naturally leads to a small number of large engineering orgs publishing just enough information so they can interoperate, while keeping enough hidden so the investment discourages the rise of smaller competitors. The result is, even now I can (and do) run my own email server. If the overly complicated bureaucratic ITU standards had won the day, I'm sure email would have been run by a small number of CompuServe-like rent-seeking parasites for decades.
Anyone remember the promise of ATM networking in the 90's? It was telecom-grade networking which used circuit-switched networking that would handle voice, video and data down one pipe. Instead of carelessly flinging packets into the ether like a savage, you had a deterministic network of pipes. You called a computer as if it were a telephone (or maybe that was Datakit?) and ATM handed the user a byte stream like TCP. Imagine never needing an IP stack or setting traffic priority because the network already handles the QoS. Was it simple to deploy? No. Was it cheap? Nooohooohooohooo. Was Ethernet any of those? YES AND YES. ATM was superior but lost to the simpler and cheaper Ethernet, which was pretty crappy in its early days (thinnet, thicknet, terminators, vampire taps, AUI, etc.) but good enough.
The funny part is this has the unintended consequence of needing to reinvent the wheel once you get to the point where you need telecom-sized (or telecom-like) infrastructure. Ethernet had to adapt to deterministic real-time needs, so various hacks and standards have been developed to paper over these deficiencies, which is what TSN is: reinventing ATM's determinism. In addition we also now have OTN, yet another protocol to further paper over the various other protocols and mux everything down a big fat pipe to the other end, which allows Ethernet (and IP/ATM/etc) to ride deterministically between data centers.
1. SMTP predates DNS, or really even most of the internet. It was originally designed to work over UUCP.
2. Early SMTP used bang paths (remember those?) where the route or partial route is baked into the path.
On the ITU side, they have made improvements including allowing a plain fully qualified domain name as the subject of a certificate, as an alternative to a sequence of sets of attributes.
However, I think DER is good (and is better than BER, PER, etc in my opinion). (I did make up a variant with a few additional types, though.)
OID is also a good idea, although I had thought they should add another arc for being based on various kind of other identifiers (telephone numbers, domain names, etc) together with a date for which that identifier is valid (to avoid issues with reassigned identifiers) as well as possibility of automatic delegation for some types (so that e.g. if you register an account on another system then you can get a free OID from it too; there is a bit of difficulty in some cases but it might be possible). (I have written a file about how to do this, although I did not publish it yet.)
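For anyone who hasn't looked at the wire format, here is a rough sketch of how X.690 packs an OID's arcs into bytes; the two example OIDs are real (commonName and the RSA/PKCS arc):

```python
# X.690 OID encoding: the first two arcs share one byte (40*a + b), the rest
# are base-128 with the high bit marking "more bytes follow".
def encode_oid(arcs: list[int]) -> bytes:
    out = [40 * arcs[0] + arcs[1]]
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]
        arc >>= 7
        while arc:
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes([0x06, len(out)]) + bytes(out)   # tag 0x06 = OBJECT IDENTIFIER

print(encode_oid([2, 5, 4, 3]).hex())         # commonName   -> 0603550403
print(encode_oid([1, 2, 840, 113549]).hex())  # RSA/PKCS arc -> 06062a864886f70d
```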
For Linux heads, it was doubly annoying, as ATM was not directly supported in the kernel. You had to download a separate patch to compile the necessary modules, then install and run three separate system daemons, all with the correct arguments for our network, just to get a working network device. And of course you had to download all the necessary packages with another computer, since you couldn’t get online yet. This was the early 2000s, so WiFi was not really common yet.
Even once you got online, one of the daemons would randomly crash every so often and you'd have to restart to get back online. It was such a pain.
Now I'm young enough not to have seen teletypes in an actual production use setting, but I've never heard anyone suggesting the presentation layer was for teletypes. That's just Google-level FUD.
X.500 was stripped down to form LDAP
No, LDAP was a student project from UMich that somehow gained mindshare because (a) it wasn't ISO, and (b) it cleverly had an 'L' in front of it. It's now more complex and heavyweight than the original DAP, but people think it isn't because of that original clever bit of marketing.
I don't think that's IETF policy. Individual IETF working groups decide whether to request publication of an RFC, and the availability of open source implementations is a strong argument in favour of publication, but not a hard requirement.
If the IETF standards are sometimes useful, it's more a matter of culture than of policy.
ATM was nifty if you had a requirement of establishing voice-style, i.e. billable, connections. No thanks. It was an interesting technology but hopelessly hobbled by the desire to emulate a voice call that fit into a standard invoice line.
Without being able to get too into the telco detail, I think the lesson was that hard realtime is both much harder to achieve and not actually needed. People will happily chat over nondeterministic Zoom and Discord.
It's both psychological and slightly paradoxical. Once you let go of saying "the system MUST GUARANTEE this property", you get a much cheaper, better, more versatile and higher bandwidth system that ends up meeting the property anyway.
There's likely an element of the "layering TCP on TCP" problem going on, too.
The classic popular treatment of the subject is: https://www.wired.com/1996/10/atm-3/
Once those requirements dropped (partially because people just started to accept weird echo), the replacement became MPLS and whatever you can send IP over, where Ethernet sometimes shows up as packaging around the IP frame but has little relation to Ethernet otherwise.
I worked on a network that used RSVP ( https://en.wikipedia.org/wiki/Resource_Reservation_Protocol ) to emulate the old circuit-switched topology. It was kinda amazing to see how it could carve guaranteed-bandwidth paths through the network fabric.
Of course, it also never really worked with dynamic routing and brought in tons of complexity with stuck states. In our network, it eventually was just removed entirely in favor of 1gbit links with VLANs for priority/normal traffic.
We just wanted our own stuff. We did not want to coordinate with a proprietary vendor to network or be charged by the byte to do so.
It was complete garbage.
Another lab of theirs made a Winsock that would use ATM SVCs instead of TCP, and proudly produced a brochure extolling their achievement: "Web protocol without having to use TCP". Because clearly it was TCP hindering adoption of the Web /s
The Bellhead vs. Nethead was a real thing back then. To paraphrase an old saying about IBM, Telcos think if they piss on something, it improves the flavor.
One of the jobs I applied for out of college was to lead Schengen's central police database (think stolen car reports, arrest warrants, etc.), which would federate national databases. For some unfathomable reason, they chose X.400 as the messaging bus for that replication, and endured massive delays and cost overruns for that reason. I guess I dodged a bullet by not going there.
I love this. Ethernet is such shit. What do you mean the only way to handle a high speed to lower speed link transition is to just drop a bunch of packets? Or sending PAUSE frames which works so poorly everyone disables flow control.
That approach of course didn’t age well when voice almost became a niche application.
UPDATE: some say that's because XMPP was too encompassing a standard (if a format allows you to do too much, it loses usefulness, like saying that a binary file format can store anything). IMO that's not the reason; they could just have supported their own subset. They scrapped interoperability for competition only, IMO.
However, actually building a functional routing infrastructure that supported QoS was pretty intractable. That was one of several nails in ATM's coffin (I worked a little on the PNNI routing proposal).
edit: I should have admitted that yes, loss does have a relationship to queue depth, but that doesn't result in infinite queues here. It does mean that we have to know the link delay and the target bandwidth and have per-flow queue accounting, which isn't a whole lot better really. Some work was done with statistical queue methods that had simpler hardware controllers, but the whole thing was indeed a mess.
When I pointed out in a previous post how much X.400 sucked, even that never got anywhere near X.25. X.25 is the absolute zero on any networking scale: the scale starts with X.25 at -273 °C and goes up from there.
I think standards are important, and I'm sad that no one bothers anymore, but stuff like this and the inclusion of interlace in digital video for that little 3 year window when it might have mattered does really sour one on the process.
What you need is more than enough bandwidth.
Think of the difference between a highway with few cars versus a highway filled to the brim with cars. In the latter case traffic slows to a crawl even for ambulances.
It seems like it was just cheaper and easier to build more bandwidth than it was to add traffic priority handling to internet connectivity.
Seeing that the tech would never be good enough, they sold off the whole thing for cheap. Years later, they bought it back for way, way more money because they desperately needed to get into the cell phone business that was clearly headed to the moon.
I totally understand the pride they had in the reliability of their system, but it turns out that dropped calls just aren't that big of a deal when you can quickly redial and reconnect.
Well, not "happily". (Doesn't every video conference do the "hold on, can you hear me? I have wifi issues" dance every other day?) But it works on a good day.
And this sort of checks out: most of the complaints about the internet architecture come when someone starts putting smart middleboxes in a load-bearing capacity, and now it becomes hard to deploy new edge devices.
That was a selling point, because "hey, we guarantee this circuit", but it was also very expensive and labor-intensive.
Whereas just dumping your bits into the internet and letting the network figure it out outsourced a lot of that complexity to every hop along the network you didn't own. But because everyone cares about their own networks, each hop would (in theory) be kept healthy, so you didn't need to hand-hold your circuit or route completely end to end.
BTW, I searched Kagi for "tolerable latency without echo cancellation in France" and saw your comment. Wow. I didn't realize web crawlers were that current these days.
In my club, however, where people don't have frequent video meetings, there is always somebody with trouble when there is a virtual club meeting ... often the same person.
Those old phones had a long range. It was hard to make small ones because the old AT&T towers were much farther apart, up to 40km. Meanwhile, their competitors focused on smaller coverage areas (e.g. 2km or less for PCS) and better tech (CDMA), and it seemed to pay off.
To handle a speed transition without dropping packets, the switch or router at the congestion point needs to be able to buffer the whole receive window. It can hold the packets and then dribble them out over the lower speed link. The server won’t send more packets until the client consumes the window and sends an ACK.
But in practice the receive window for an Internet scale link (say 1 gigabit at 20 ms latency) is several megabytes. If the receive window was smaller than that, the server would spend too much time waiting for ACKs to be able to saturate the link. It’s impractical to have several MB of buffer in front of every speed transition.
Instead what happens is that some switch or router buffer will overflow and drop packets. The packet loss will cause the receive window, and transfer rate, to collapse. The server will then send packets with a small window so it goes through. Then the window will slowly grow until there’s packet loss again. Rinse and repeat. That’s what causes the saw-tooth pattern you see on the linked page.
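As a back-of-the-envelope check on the numbers in the example above:

```python
# Bandwidth-delay product for the example link: 1 Gbit/s with 20 ms latency.
link_bits_per_second = 1_000_000_000   # 1 Gbit/s
round_trip_seconds = 0.020             # 20 ms

in_flight_bytes = link_bits_per_second / 8 * round_trip_seconds
print(f"{in_flight_bytes / 1_000_000:.1f} MB in flight")   # -> 2.5 MB in flight
```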
We were expecting to get some sort of unbelievably fast internet experience, but it was awful as the internet gateway was 1 Gb or something similar.
If the history of email had gone somewhat differently, the last email you sent could have been rescinded or superseded by a newer version when you accidentally wrote the wrong thing. It could have been scheduled to arrive an hour from now. It could have auto-destructed if not read by midnight.
You would never have needed to type “as per my previous message.” Instead, you could have linked emails together into a personal Wikipedia of correspondence. You could have messaged an entire organization or department, with your email app ensuring the message was deliverable before it left your outbox.
And you could have attached files and written a multilingual message with letters beyond ASCII’s 128 characters, eight years before those features came to internet email. You could have been notified when the message was read a full 15 years before email had something similar tacked on. Encryption would have been baked in from the start, rather than waiting for PGP, S/MIME, and TLS to add it later.
All that, and more, was standardized in the 1984 spec for X.400 as Interpersonal Messaging. It was everything we call email today, and then some.
“We had a better system back in the day: X.400,” as one commentator reminisced. SMTP, the Simple Mail Transfer Protocol that became the standard behind how modern email is sent, “didn’t win because it was ‘better,’” he argued, but “just because it was easier to implement. Like a car with no brakes or seatbelts.”
“Of all the things OSI has produced, one could point to X.400 as being the most successful,” agreed Marshall T. Rose, a developer who helped bridge the differences between X.400 and SMTP email. Differences like X.400’s bang-path-esque addresses, such as C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald, while SMTP email addresses looked like Harald.Alvestrand@uninett.no.
“On the other hand,” he concluded, “that’s kind of like saying that World War II was the successful conclusion of the Great Depression.”
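(To make the comparison concrete, here is a small, purely illustrative sketch of how the attribute-style address above maps onto the internet-style one; C, O, S and G are the standard X.400 country, organization, surname and given-name attributes.)

```python
# A toy parser for the attribute-style X.400 address quoted above,
# mapping its fields onto an internet-style address. Illustrative only.
def parse_x400(addr: str) -> dict[str, str]:
    pairs = (part.split("=", 1) for part in addr.split(";") if part.strip())
    return {key.strip().upper(): value.strip() for key, value in pairs}

attrs = parse_x400("C=no; ADMD=; PRMD=uninett; O=uninett; S=alvestrand; G=harald")
print(f'{attrs["G"]}.{attrs["S"]}@{attrs["O"]}.{attrs["C"]}')
# -> harald.alvestrand@uninett.no
```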

Exchange Server was, in part, built on X.400 standards, and connected to X.400 for years after the standard had faded from popularity
Six months before Neil Armstrong stepped on the moon, the United States Department of Defense started building ARPANET, a network to link computers around the country, budgeted from money redirected from missile defense.
It was on that network that email as we know it was invented. Ray Tomlinson pulled file transfer software, the ARPANET network, and the @ symbol together, and in 1971 email was born. Soon enough it was taking up more than three-quarters of all ARPANET traffic. “Here was this fantastic infrastructure built at government expense for serious purposes — and these geeks were using it for sending messages to one another,” as John Naughton put it in his Brief History of the Future.
Email—or at least the idea of email—took the world by storm. CompuServe offered electronic mail to businesses in 1978 and to consumers a year later, with numeric IDs to message anyone else on their network. Or you could subscribe to The Source (launched in ’78) or MCI Mail (as of ’83) or AppleLink (fashionably late in ’86, then to power the first email to space in ’91).
Telecoms and governments joined the rush. By 1982, British Telecom launched their Telecom Gold email solution, and USPS, in a $40 million misstep, tried to monopolize email on paper with E-COM. “Two-thirds or more of the mailstream could be handled electronically,” assumed Congress a mere eleven years after Tomlinson sent the first email.
Yet the majority of those emails were messages inside walled gardens. You could email anyone you wanted, as long as they, too, used the same service. Even email’s original home was a mess. “By 1977, the Arpanet employed several informal standards for the text messages (mail) sent among its host computers,” stated RFC 822, an attempt in 1982 to standardize email. Someone had to make electronic messages speak the same language.
In stepped the United Nations. “The establishment in various countries of telematic services and computer-based store-and-forward message services in association with public data networks creates a need to produce standards to facilitate international message exchange between subscribers to such services,” opened the document that aimed to standardize email, three layers of bureaucracy removed from the Secretary-General, and for a moment email could have been an international standard.

One of the simpler diagrams from the X.400 standard
Email should be clear and concise, says the United Nations today, decades removed from the medium’s chaotic early years. Focused on a single topic, with short, meaningful sentences free from jargon. It should be positive, civil, and formal when appropriate, advises the self-described universal global organization.
Under its auspices—via the International Telecommunication Union's Consultative Committee for International Telephony and Telegraphy and the UNESCO-linked International Federation for Information Processing—email was almost standardized in October 1984 under the X.400 spec that was anything other than concise and jargon-free.
“This Recommendation is one of a series of Recommendations and describes the system model and service elements of the message handling system (MHS),” started the Data Communication Networks Message Handling Systems document that spelled out the X.400 spec, drafted by a committee chaired by Canadian Department of Communications senior advisor V. C. MacDonald and filled with national telecom representatives. “The MHS model uses the techniques of the OSI Reference Model to formally define the layered communication structure used between the model’s functional components.” And so on and so forth, for 266 pages. It took six pages to describe how to address messages without once showing a complete email address (and perhaps that was for the best, since X.400 addresses were varied enough that RFC 1506 identified six common ways to format them).
It was convoluted, over-described, and under-specified, right when email most needed simplification. And it was late.
Two years earlier, the Simple Mail Transfer Protocol had been spelled out in 68 short pages. “The objective of Simple Mail Transfer Protocol (SMTP) is to transfer mail reliably and efficiently,” wrote University of Southern California research scientist Jon Postel in RFC 821 about the system that built on the ARPANET’s original email protocols and the earlier Mail Transfer Protocol. “The SMTP design is based on the following model of communication: as the result of a user mail request, the sender-SMTP establishes a two-way transmission channel to a receiver-SMTP.” Its email addresses used a refreshingly simple user@domain format. Its syntax spelled out exactly how a simple email should work, and little more.
Very quickly, the community effort won out over the committee.
“Using the X.400 recommendations themselves is practically impossible in most cases, since just learning to read them takes a fair effort which can be expended only by specialists,” opened Cemil Betanov’s Introduction to X.400 book, published in 1993. “X.400 was conceived as a tool, rather than a product.”
X.400’s spec prescribed outcomes: that software shall do this, and this shall happen as a result. SMTP instead typically described exactly how things should work.
Sending a message, for instance, is described in X.400 as follows, with a description of the desired outcome (envelopes, in X.400, generally stood for what today we’d think of as an email message with headers):
The submission interaction is the means by which an originating UA transfers to an MTA the content of a message plus the submission envelope. The submission envelope contains the information the MTS requires to provide the requested service elements.
SMTP, on the other hand, describes sending an email with specific command names and interaction steps:
There are three steps to SMTP mail transactions. The transaction is started with a MAIL command which gives the sender identification. A series of one or more RCPT commands follows giving the receiver information. Then a DATA command gives the mail data. And finally, the end of mail data indicator confirms the transaction.
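(As a sketch of how directly that maps to code, here is roughly what those steps look like with Python's standard smtplib, whose low-level calls correspond one-to-one to the commands above; the hostnames and addresses are placeholders.)

```python
# A minimal sketch of the RFC 821 transaction, using low-level smtplib calls
# that correspond directly to the MAIL / RCPT / DATA steps described above.
import smtplib

with smtplib.SMTP("mail.example.com") as smtp:     # placeholder relay
    smtp.ehlo("client.example.org")
    smtp.mail("alice@example.org")                 # MAIL FROM:<alice@example.org>
    smtp.rcpt("bob@example.com")                   # RCPT TO:<bob@example.com>
    code, reply = smtp.data(                       # DATA, then the message body,
        b"Subject: Hello\r\n\r\nJust testing."     # ended by <CRLF>.<CRLF>
    )
    print(code, reply)                             # e.g. 250 ... message accepted
```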
There were reasons for the complex verbiage. X.400 was imagined as an ideological framework that telecoms and software vendors could each implement in their own way. The ugly addressing? It “provides solutions to certain problems and is ugly for good reason,” Betanov explains. “Make it less ugly, and it immediately loses functionality. Thus, the solution is not to make addressing nicer, but to hide it from the user,” something both internet email and X.400-powered software could easily do with headers, not so much with addresses.
Users liked the ideas in X.400, liked the potential of interoperability and richer email. Businesses and governments alike found its security features alluring, with authenticated message origins, body part encryption to keep privileged data from prying eyes, and classification labels. User demand led businesses to deploy it. By 1989, X.400 was supported by “22 E-Mail software vendors,” including software names like cc:Mail and Lotus, computer makers like DEC, and telecoms like AT&T. “X.400 had interconnected one million mailboxes on many networks by 1994,” wrote Dorian Rutter in a thesis on British networks (paling beside the estimated 25 million internet users that same year).
But they were equally taken aback by the difficulty of using it, and by implementations that fell short.
X.400 was “top-down,” MIME author Nathaniel Borenstein relayed on a call. “That's the way the telecoms did things. They would set out requirements, and their teams of people who wrote the specifications would fulfil those requirements.”
It was easy enough, in theory, for AT&T or British Telecom to implement the standard they helped create. “Because they had total control over the architecture, they could do that a lot more than you can in today's world.” So it was possible, say, for one implementation of X.400 to offer X.400 features like recalling a message, in theory at least, when such guarantees would fail as soon as messages left their walled garden. But “they couldn't buck the rules of physics,” Borenstein concluded. Once a message reached another server, the X.400 implementations could say that an email was recalled or permanently deleted, but there was no way to prove that it hadn’t been backed up surreptitiously.
And thus X.400’s original mission of interoperability was doomed to failure, regardless of how far original X.400 implementations spread.
Despite the standard, Rutter’s thesis found, “most e-mail users remained isolated from each other. X.400 had therefore failed to fulfil the promise set for it by its proponents.” Another case study into why X.400 failed reached a similar conclusion: “Even early implementations of the incomplete initial X.400 version were frequently incompatible,” wrote Kai Jakobs. “It was next to impossible to exchange messages between systems from different vendors.”
Inside the X.400 ecosystem itself, the complexities added up to an unworkable system. As Tom Fitzgerald parodied it: “X.400: So secure that an X.400 mailer won't even talk to another X.400 mailer from a different vendor.” “I have several accounts that could be reached by X.400, each of which could be used in a different way, depending on what system you come from,” recounted Jim Carroll, co-author of the Canadian Internet Handbook. “You might reach me as c:us,a:mcimail;f:jim;s:carroll; on one system, or you might reach me using the method mhs!c=us/ad=mcimail/pn=jim_carroll, while on yet a third system you might send to me using the form [jim_carroll/jacarrollconsulting] mcimail/usa.”
“People pay me to help them figure out how to use X.400. They pay me!” Carroll marveled. “Isn't there something wrong with this picture—an addressing standard that is so complicated that you have to hire a consultant to figure it out?”
Telecoms standardized X.400. Governments, from the US’s GOSIP to the EU’s procurement rules, mandated it. Developers either rued its complexity or raved over its potential.
Meanwhile the simple mail transfer protocol spread like wildfire, and by 1993 even the United Nations acquiesced to sending email over both X.400 and the internet.
“I worked for a company that ran X.400 commercially, before the Internet really got going,” shared Chris Marshall, a former Dialcom employee. “It did, indeed, have many things that we wish email had, these days, like true read receipt and routing management. But it was a complex beast, and that is why it lost out to simple SMTP and POP.”
“X.400 is dead,” Carroll surmised, “because it isn't as simple as the telephone, fax, and Internet e-mail.”

Aeronautical software Lunar AMHS, with X.400-style addresses while submitting a flight plan
You don’t email with X.400 today. That is, unless you work in aviation, where AMHS communications for sharing flight plans and more are still based on X.400 standards (which enables, among other things, prioritizing messages and sending them to the tower at an airport instead of a specific individual). It’s used, sparingly today, in militaries, governments, and banking—and previously powered parts of the SWIFT standard for transferring money.
And if you use Microsoft Outlook with a Microsoft Exchange Server, you might recognize some similarities with X.400 (and its related X.500 standard for directories, “the one part of OSI that actually won,” as Borenstein put it). Exchange included built-in authentication, long before SPF, DKIM, and DMARC were possible, and its delivery reports are still more detailed than their SMTP counterparts. “The entire data model of MAPI is based on [X.400],” said @p_l in a Hacker News comment, “shared between Outlook and Exchange, with somewhat lossy translation when it has to go outside of X.400-over-RPC that MAPI provides.”
Internet email—the SMTP stack we’d come to just call email—gained enough features over the years to nearly reach parity with X.400. It moved fast, far faster than X.400. The original idea that turned into X.400 started with a working group convened in 1978; it took 6 years to get the first standard, and 4 more years to update it. In that same timeframe, 339 RFCs were published, including the nine core email-focused RFCs. And email’s changes were implemented in a way that let every email system do its own thing, maintaining uniqueness and compatibility at the same time.
MIME, the standard that among other things added multi-language support and attachments to SMTP email, started around existing email systems. “Let us assume that we have an existing electronic mail infrastructure. And now we are going to figure out the minimalist set of changes which we can add on top of that,” Marshall Rose described MIME’s approach. X.400, by contrast, had “this kind of blanket assumption that someday everything will be X.400 and we won’t have to worry about existing mail systems.”
Email’s a messy, living standard, one that’s survived this long in part thanks to the simplicity embodied in SMTP’s name. It looked too simple at first, almost emblematic of venture capitalist Chris Dixon’s postulation that “The next big thing will start out looking like a toy.”
A simple mail transfer protocol it was, but it did just enough to get email systems talking to each other. It specified just enough to make diverse implementations compatible. And it was rapidly iterated on enough that by the time X.400 systems were ready for use, people were using SMTP-powered email to talk about it.
And that was enough to relegate X.400 to the inspiration pile, and for SMTP to outlive X.400 as what we’d know today as email.
Image credits
| Image | Credit |
|---|---|
| Header photo from the ITU | International Telecommunication Union |
| X.400 in Exchange Server | Network Encyclopedia |
| Lunar AMHS Terminal | Galadrium |