Too bad because it's an interesting question that I would also like to know the answer to.
Servers ranged from 144 GB to 3 TB of RAM, and that memory was heavily utilized. On servers meant to be stateless app and web servers, panic was set to 2 to reboot on OOM, which mostly occurred on the performance team's machines that were constantly load testing hardware and apps, and on a few dev machines where developers were not sharing nicely. Engineered correctly, OOM will be very rare, and this only gets better with time as applications gain more control over memory allocation and other tools like namespaces/cgroups. Java will always leak; just leave more room for it.
I install more RAM so I can swap less. If I have 8 GB, then the 2x rule means I should have a 16 GB swap file, giving me 24 GB of total memory to work with. If I then stumble upon a good deal on RAM and upgrade to 32 GB, and I never had memory problems with 24 GB, then I should be able to completely disable paging and not have a problem. But instead, the advice would be to increase my paging file to 64 GB!?
It doesn't make any sense. At all.
>Question: Why do you need 500MB of swap space? You would be better of
>spending your money on more RAM than wasting it on so much swap space,
>considering that it would most likely never be used anyways.
I work with systems that have between 256MB and 1GB of RAM and
between 4GB and 16GB available for Linux. My experience with other
operating systems is that swap should be 2X to 3X RAM
...
The info that I have read about Linux is that the 2x for swap space is
only for those running less than 16mb of ram. Your swap space could be
equal to your ram
...
I know there are broken OSes out there where it's recomended to
have 2x RAM swapspace, but Linux is not broken in that way.
With Linux you should have <Max needed memory> - <RAM> swapspace,
and depending on your needs that might range from 0 to infinity
MBs of swap.
...
THIS IS CRAZY!!!! YOU DON'T KNOW WHAT THE F--K YOU'RE TALKING ABOUT.
It goes downhill from there: https://groups.google.com/g/alt.os.linux.slackware/c/hWy0h_S...
When a process forks, the child needs swap reservations for the parent's entire address space (until exec replaces it). A large process that forks temporarily needs double its swap allocation. If your working set is roughly equal to physical RAM, fork alone gets you to 2x.
This was the practical bottleneck people actually hit. Your system had enough RAM, swap wasn't full, but fork() failed because there wasn't enough contiguous swap to reserve. 2x was the number that made fork() stop failing on a reasonably loaded system.
The later overcommit/copy-on-write changes made this less relevant, but the rule of thumb outlived the technical reason. Most people repeating "2x RAM" today are running systems where anonymous pages aren't swap-backed until actually paged out.
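To make the arithmetic concrete, here is a toy sketch of strict commit accounting in the spirit of Linux's vm.overcommit_memory=2 mode, where CommitLimit = swap + overcommit_ratio% of RAM (50% is the documented default ratio; the 7 GiB process is a made-up example):

```python
# Sketch of strict commit accounting (vm.overcommit_memory=2), where
# every private writable mapping must be backed up front. The fork
# scenario below is hypothetical, for illustration only.

def commit_limit(ram, swap, overcommit_ratio=50):
    """CommitLimit = swap + ram * overcommit_ratio / 100."""
    return swap + ram * overcommit_ratio // 100

GiB = 1 << 30
ram, swap = 8 * GiB, 8 * GiB           # swap == RAM
limit = commit_limit(ram, swap)        # 12 GiB with the default ratio

# A 7 GiB process that forks needs commit for BOTH copies until exec:
proc = 7 * GiB
assert proc <= limit                   # the process itself fits...
assert 2 * proc > limit                # ...but its fork would be refused

# With the old "swap = 2x RAM" rule the same fork goes through:
assert 2 * proc <= commit_limit(ram, 2 * ram)
print("fork headroom with 2x swap:", (commit_limit(ram, 2 * ram) - 2 * proc) // GiB, "GiB")
```

With overcommit (the modern default) none of this reservation math applies, which is exactly why the rule outlived its reason.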
Today swap is no longer about extending your address space; it's about giving the kernel room to page out cold anonymous pages so that RAM can be used for disk cache.
A little swap makes the system faster even when you're nowhere near running out of memory, because the kernel can evict pages it hasn't touched in hours and use that RAM for hot file data instead.
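If you want to see this trade-off on a live box, the relevant counters are in /proc/meminfo (Linux-specific; field names as documented in proc(5) — this is just a rough parsing sketch):

```python
# Peek at how much RAM the kernel is spending on file cache versus how
# much anonymous memory it has pushed to swap (Linux-specific).

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])  # value in kB for most fields
    return info

m = meminfo()
# "Cached" is RAM holding file data; evicting cold anonymous pages to
# swap is what lets this number stay large under memory pressure.
print("RAM used for file cache:", m["Cached"], "kB")
print("Swap in use:", m["SwapTotal"] - m["SwapFree"], "kB")
```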
The exception is hibernation — you need swap >= RAM for that, which is why Ubuntu's recommendations are higher than RedHat's 20% of RAM.
The ship's long sailed though, so even I run with overcommit enabled and only grumble about what might have been.
Sure, you're still better off with 24 GB overall compared to 8 GB + swap, whether you add swap to your 24 GB or not, but swap can still make things better.
(That says nothing about whether the 2x rule is still useful though, I have no idea.)
Honestly, I think overcommit is a good thing. If you want to give a process an isolated address space, then you have to allow that process to lay out memory as it sees fit, without having to worry too much about what else happens to be on the system. If you immediately "charge" the process for this, you will end up nit-picking every process on the system, even though with overcommit you would have been fine.
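A minimal illustration of why that works: with overcommit, reserving address space is essentially free, and memory is only charged when a page is first written. This sketch uses Python's mmap module for an anonymous mapping (the 1 GiB size is arbitrary):

```python
import mmap

# Reserve 1 GiB of address space without consuming 1 GiB of RAM+swap.
# With overcommit enabled (the Linux default) this succeeds even on
# small machines, because pages are zero-filled on demand and only
# cost anything once they are first written.
size = 1 << 30                   # 1 GiB of address space
m = mmap.mmap(-1, size)          # anonymous mapping, lazily committed
m[0:4] = b"\x01\x02\x03\x04"     # touching a page is what commits it
assert m[0:4] == b"\x01\x02\x03\x04"
m.close()
print("mapped", size, "bytes; committed only the pages we touched")
```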
Today it is often weird. My laptop has only a 256 GB SSD but 32 GB of RAM. Following this rule, a fourth of my disk would be swap.
At the time, the situation was different. In 1997, a machine with 8 MB of physical RAM and a 320 MB disk was typical. The 2x swap was only 1/20 of the disk.
The ratio shifted because RAM grew very big recently, driven by the demands of the latest Windows, while disks did not grow as much, because everything moved "into the cloud". Besides that, everyone switched to SSDs, since HDDs were always suboptimal for the NTFS / Windows use cases. But an SSD of the same size is still more costly than an HDD, although the difference is shrinking because SSDs get much more development investment.
The reason for the 2x was swap fragmentation. Those disks were still HDDs, with a seek time, so it was essential to write everything to them in consecutive chunks.
The goal was not to fill the swap; the goal was that the system could always find a consecutive empty block, so it could write a contiguous memory range into swap.
People who fundamentally did not understand how system paging and the block cache work existed even at the time. I even heard university sysadmins say that "we can solve the memory needs of our local server without forced compromises, like swap". I understood only decades later that he was not an expert who knew something I did not; rather, he did not even know what I already knew (at the time). Many people held the very silly idea that "swap is slow, so I turn it off". But this happened only among the "Linuxers" (incl. FreeBSD / Solaris etc.), because the Windows guys mostly did not know what swap was at all. They also did not know how to turn it off.
My impression is also that software behaved a bit differently. At the time, if you had 8 MB of RAM and your processes used, for example, 14 MB, you had a minimal block cache and used 8 MB of swap; that was slow but fine. Today that would be a nearly unusable system. My impression is that this is because today's processes are much happier to regularly touch all their memory pages, particularly the VMs/interpreters (a JVM garbage collection, for example, regularly walks over the whole heap of a process).
The swap == 2x physical RAM came from the need to be able to swap out all of your physical RAM, but to always do it into contiguous empty disk ranges.
To your extension: FreeBSD was special; early FreeBSD had no paging, only segmentation. As far as I remember, it could swap out only whole processes (or segments). Note that the same goes for Win 3.x. Obviously, the need for contiguous block allocation was far more important in this case, although the 2x remained.
First, it was never a rigid rule; it was always a rule of thumb. Although in practice it was very often used as a rigid rule, for a very simple reason: we had no better idea.
Now the question is why it was this particular rule. Imagine what happens in a swap space: random page ranges keep arriving, to be swapped out and swapped back in. Your primary interest is to always allocate contiguous block ranges for them on the disk. Disk is cheap (by size); memory prices are astronomical.
Sometimes a factory burns down in Taiwan, and for the next 2 years new computers have half the RAM at the same price.
Disk is also costly, but the seriousness of the situation is nowhere near that of RAM.
Back to the 2x. It is a rule of thumb, and the reason behind it is partly psychological and partly technological; but it is the partly psychological decision of people who really understand what is going on in their machine.
Now imagine, play it out in your mind, what a swap algorithm is doing. You have a queue of tasks: write out ranges, read back ranges. They arrive randomly (from your point of view). Play this story in your mind. I would happily build a video for it, but I trust that you are capable of imagining it.
Then decide what kind of "overcommit" you would allow for the disk usage. You know nothing about the user, but your little brother comes and asks what he should tell the Linux installer about the swap size.
No, you won't say 1.5x, and not 3x either. The background logic on which you will say 2x is that you estimate that, at maximal or close-to-maximal overload, roughly the same amount of data should lie in the swap as your physical RAM. That comes from the fact that this was roughly the point (at the time) where computers became unusably slow.
Now you want this to be allocatable in contiguous ranges. This is again partly psychological, because anything can happen on the swap partition (or swap file), but you know such events are rare. The heuristic here is that you want roughly the same amount of free swap space as allocated swap space.
Thus, the 2x was the combination of these rules of thumb. Pack them together: you have 8 MB of RAM, so you have 16 MB of swap.
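The contiguous-allocation pressure described above can be played out in a toy simulation (all sizes and the workload are invented): contiguous ranges are allocated first-fit in a swap area, and a request fails when no single free run is large enough, even if the total free space would suffice:

```python
import random

# Toy model of the "free swap ~= allocated swap" heuristic: page ranges
# are swapped out (allocated first-fit as contiguous runs) and swapped
# back in (freed, with adjacent holes coalesced). A swap-out fails when
# no contiguous hole fits it: external fragmentation.

def failures(swap_pages, live_target, steps=2000, seed=1):
    random.seed(seed)
    free = [(0, swap_pages)]          # sorted (start, length) holes
    allocs = []                       # (start, length) ranges in use
    live = failed = 0
    for _ in range(steps):
        if live < live_target or not allocs:
            need = random.randint(1, 16)        # a range to swap out
            for i, (start, length) in enumerate(free):
                if length >= need:              # first contiguous fit
                    allocs.append((start, need))
                    if length == need:
                        free.pop(i)
                    else:
                        free[i] = (start + need, length - need)
                    live += need
                    break
            else:
                failed += 1                     # no hole big enough
        else:
            start, length = allocs.pop(random.randrange(len(allocs)))
            free.append((start, length))        # range swapped back in
            free.sort()
            merged = [free[0]]                  # coalesce adjacent holes
            for s, l in free[1:]:
                ps, pl = merged[-1]
                merged[-1:] = [(ps, pl + l)] if ps + pl == s else [(ps, pl), (s, l)]
            free = merged
            live -= length
    return failed

ram = 1024                            # one "RAM" worth of pages
print("swap = 1.25x RAM, failures:", failures(ram + ram // 4, ram))
print("swap = 2x RAM,    failures:", failures(2 * ram, ram))
```

With a RAM-sized live set, the 2x area keeps free space roughly equal to allocated space, so a large contiguous hole is almost always available; the tighter area runs out of big holes first.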