It helped that the DOS executable format was the same as the CTOS format - because we had traded Bill Gates our linker (which produces executables) for his BASIC compiler.
This NTSync stuff is very impressive, but I haven't seen a lot of end-to-end numbers versus Windows. The last comparisons I saw showed pretty much every distribution on the order of 5-30% behind Windows, varying on the game. And Nvidia GPU support was still not great.
I WANT to swap. Please give me cause to do so. I'm sitting here with my finger on the button waiting for it to finally get good enough to make sense.
Decades ago I ported some games to linux but I do think proton is the correct approach now. One underappreciated advantage is you get most of the mod environment too. In ESO for instance, there is an addon (tamriel trade center) which lets you download item prices, but it requires a windows client exe to do that. That client works on proton.
I also do some modding myself and can cross compile my rust code to windows with cargo xwin, and run it right away in proton, which is fairly amusing to behold.
I actually don't mind windows generally (been a MS user since DOS 5), but Win11 is a game changer, pun intended, and not in a good way.
Fixed in Wine 11.0. Thanks to the Wine team.
Not sure if this was related to NTSYNC, but Wine's locking infrastructure definitely got an overhaul.
https://www.kickstarter.com/projects/944362954/bapaco-the-wo...
Interesting, but I wish it was half the size folded...
What does this mean? System calls?
I have a couple more things to figure out. I need Xbox authentication to work for Halo Infinite and Sea of Thieves, among others, and I need to find solutions for some ancient software I have to run, which will probably end up being a Windows 11 VM. But as for my daily-driver OS, I am so excited to get off Windows once and for all.
If Linux was measurably 5% slower on all benchmarks, would that mean you wouldn't do it even if you wanted to? Is every single nanosecond of performance really that important to you? I switched 10 years ago when things were a lot rougher than this, and in the end everything still worked well enough that I never cared to swap back.
> These old workarounds got subtle edge cases wrong in ways that produced occasional hitches, deadlocks, or weird behavior in specific games, which are bugs that don't show up on benchmark charts but can absolutely ruin individual experiences. NTSYNC fixes those at the source by matching Windows behavior exactly, and that means as soon as your favorite distro moves to the new kernel version, whether it be Bazzite, CachyOS, Fedora, or a flavor of Ubuntu, they all get this much-needed fix.
That's the crux of the article. NTSYNC isn't faster, it's more "correct". Most games are around the same level of performance, with certain outliers both ways. Right now there isn't any single performance change Linux could make that would impact all games - just tweaks and additions to the different layers [1][2][3], in the same way driver vendors do. Much of the poor performance comes down to games' API violations and other shenanigans.
1: https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/src/uti...
2: https://github.com/doitsujin/dxvk/blob/master/src/util/confi...
3: https://github.com/HansKristian-Work/vkd3d-proton/blob/maste...
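For context, the per-game workaround lists linked above are plain config databases, and end users can apply the same kind of tweak locally. A hypothetical dxvk.conf sketch (option names taken from DXVK's documented example config; the values here are purely illustrative, not recommendations):

```ini
# dxvk.conf -- dropped next to the game executable (or pointed to
# via the DXVK_CONFIG_FILE environment variable).

# Cap the number of queued frames; a common latency workaround.
dxgi.maxFrameLatency = 1

# Some games misbehave unless they see a specific GPU vendor;
# spoofing the PCI vendor ID is a classic per-game shim.
dxgi.customVendorId = 10de
```

Proton ships a built-in database of exactly these kinds of per-title overrides, which is what the linked files are.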
Anecdotally, I find that putting Linux on somewhat older or underpowered hardware is always a massive positive: better performance as well as battery life. I'm less familiar with how modern hardware fares on either OS ("OS vs. some flavor of OS based on a similar or same kernel" - I know). Worth a shot though!
Every supercomputer seems to do quite well with Linux kernels. Probably good enough for Crysis :)
If you need every last bit of FPS, maybe it is lagging, but 5-30% slower is roughly on par in the big picture. It's less than the difference between one Nvidia GPU generation and the next, so games remain playable.
Tom's Hardware is a bit before my time, but I remember it being well regarded. I've seen a lot of similar articles under that name lately. I wonder if it's undergone a similar fate.
Seriously, is it really a victory if you have to adopt the architecture of your sworn enemy?
I can't prove it, but the Steam Deck has probably torn down a lot of barriers for mainstream use among the crowd that cares about the game more than the OS. Getting some of the other games (League, Vanguard, Warzone, BF6, etc.) or whatever is popular in those segments onboard might be the critical mass that justifies fixing all the rough edges that get fixed when a big pile of users are represented.
I recently completed Stellar Blade with zero issues.
I don’t even shutdown the machine, I just hit the power to sleep it. Instantly resumes where I left off.
Incredible to see just how far it’s come.
How do I actually see the graph?
All I see is stats for April:
- Windows 93.47% +1.14%
- Linux 4.52% -0.81%
- OSX 2.01% -0.34%
It's true what Max Planck said that science advances one funeral at a time. So does the tech industry as a whole.
AMD is much better. Nvidia has been improving but stuff "just works" with AMD because the kernel (amdgpu) and userspace (RADV) drivers are open source. Valve is a major RADV contributor too.
I don't feel like I'm missing out on anything with my 9070 XT. Performance is great.
Whereas the AMD-based Steam Deck always does what it should do.
It runs super smooth. With the built-in 'wayback machine' and 'curated' Arch distro (the 7.0 zen kernel just dropped a week ago), it's pretty much bulletproof for beginners, or as a daily distro if you want to get stuff done without caring much about it - just loving it. On the other hand, you get cutting-edge gaming tech like wine/proton staging versions by default, so I've been playing Blizzard games with NTSYNC (the tech from the article) for several months now :) Forgot about most of the flashy default UI though :D
Windows copied futexes from Linux first, anyway.
It is no different from arguing how Linux is getting better GameCube games with Dolphin.
Also, Valve is only as good as its current management. Eventually, like at any other company, time will pass and new people will make different decisions.
https://www.collabora.com/news-and-blog/blog/2020/08/27/usin...
There is the odd decent nugget in there, but it is a shame seeing them fall like this. Unfortunately the same sentiment is true about most news sites now.
- occasionally an online game breaks and it's usually fixed within a day or two. for example at some point a Battle.net update broke the launcher under Wine some time last year, then for a while Overwatch would intermittently crash once every few sessions. I haven't gamed on Windows in years so I can't even compare anecdotally, but I suspect Windows is probably slightly more stable with live service games. I've never had any issues with a single player game, period. (YMMV)
- DX12 performance is 10-20% worse on Nvidia. This should be improved Soon (TM) - I think the last piece is https://github.com/HansKristian-Work/vkd3d-proton/tree/descr...
- Some anticheats block Linux - the only times I've switched over to windows in the last year have been when some friends wanted me to play Marathon with them
- Running 'sidecars' alongside your games or modding works but is generally more of a hassle with wine
things I didn't expect to work but do:
- Game streaming with Sunshine works fine to a Samsung TV via the TizenOS Moonlight app
- Nvidia had suspend issues for a year but those have all been sorted out the last few months
I don't know what they could do, spanner-tossing-wise, to really screw with Linux gaming at this point that wouldn't just drive more frustrated customers off their platform.
https://www.gamingonlinux.com/2026/05/further-expanded-amd-h...
Oh, look at that: XDA and HTG are both owned by Valnet.
For the most part the games just work, it's more system issues that I've run into where Linux suspend mode and the audio stack can be a little flaky and required Claude to diagnose and sort out.
To quote Linus Torvalds from 1997: "I don't try to be a threat to Microsoft, mainly because I don't really see MS as competition. Especially not Windows - the goals of Linux and Windows are simply so different."
All that said, they view this as enabling the consumer by supporting their hardware better; they have an antagonistic, mafia-like relationship with game developers.
I particularly got fed up with Nvidia on linux playing War Thunder - I had a regular crash that Gaijin and Nvidia each blamed on each other, and I never did get it fixed.
Nvidia driver updates can also leave you stuck with no desktop environment on occasion and while fixable, it's a pain in the rear. However, when the drivers are right, Nvidia performance is second to none.
AMD has drivers built right into the kernel, and as long as you have whichever nonfree firmware repos your distro supports (I use Devuan, a Debian derivative), AMD cards 'just work'. If using xorg, install xserver-xorg-video-amdgpu for modern cards, and xserver-xorg-video-radeon for older cards. I'm currently playing on a Radeon 9070 (non-XT) on a 1440p monitor with plenty of performance. I know that it also works on wayland, but I have no experience there.
Heroic, because the amdgpu driver is strangely huge - more code than the rest of the OpenBSD kernel combined. It has something to do with GPUs having no ISA stability, so the generated code for each card is present in the driver.
I reckon a successful launch of the Steam box (or whatever they're calling it) with its enormous library could develop into something that really challenges what's left of Microsoft's piece of the console market (and threaten Sony a little, for that matter) though it's looking like the memory shortage is gonna kneecap that by forcing the price too high. Bad timing.
Me and all my dad friends are all signing up for XBox accounts so our kids can play Minecraft. So IDK about that.
That is, more people being subtly pushed to using display port is not a bad thing.
That's why all the data matters for all of these dimensions; game performance is much more than FPS per watt over time.
When people see "Linux gaming is great now, look at the FPS," it comes across as potentially disingenuous because of all the other factors that matter and should be tested. Or rather, if a reviewer talks entirely about framerate, then I just can't trust their opinion and expertise when it comes to the state of Linux gaming.
[1] https://news.ycombinator.com/item?id=47513667 [2] https://lore.kernel.org/lkml/f4cc1a38-1441-62f8-47e4-0c67f5a...
What do you mean? SRWLock (or the older CRITICAL_SECTION) cannot be shared between processes. A (Win32) Mutex does work across processes, but that's its entire purpose. So Windows does have different tools for different jobs.
In fact, it's really the other way round: on Linux, a futex also works across processes, but there is no equivalent in Windows. (Sadly, WaitOnAddress can only be used in a single process.)
What benchmarks are you talking about? CPU-wise the A15 Bionic just barely beats the Ryzen 3700X in single-core and gets absolutely destroyed in multi-core (Geekbench). As for the GPU, the Radeon RX 7600 (closest thing I can find to a "modern console") does >10x the TFLOPS in FP32.
The only reason why they look like they're "in a similar tier" in ported games is because the A15 Bionic is usually tested on 5-6" screens that can be upscaled from 360p without any measurable loss in visual quality, with a massive downgrade in model and texture quality for the same reason. The only modern console the Apple TV "may be" similar to is the Switch 1
Any modern distro running NVidia or AMD should be fine. I've done both. I didn't have to do anything for the NVIDIA 3000 or NVIDIA 4000 series cards but select the nvidia driver. AMD otoh is built into kernel now.
This isn't Linux looking to destroy MS, this is mostly Valve understanding the requirement for an OS that won't be able to become predatory to them and their business model in a single system update.
love linux but the audio situation has always been bad.
If you want statistics, Linux’s gaming market share is 2x that of MacOS.
The barriers to gaming on Linux have never been lower. They’re certainly much lower than the barriers to running windows games on windows were back in the Win 95 - XP SP2 days (when I jumped ship).
I would say custom modding and online multiplayer anti-cheat systems are the last real hold outs, and even then it doesn't affect every game.
My point is, you may find the one or two games holding you back won't be missed much.
https://www.geeksforgeeks.org/operating-systems/traps-and-sy...
That seems hugely useful for interprocess communication and I can immediately think of reasons to use IPC in a game. Having a separate voice process for one.
Didn't help connecting it to my Macbook, but still..
And then monitors released during this time generally do the same too.
Also if you want to use it through a capture card, HDMI ones are way more common and cheaper
> the goals of Linux and Windows are simply so different.
So different that Windows muscle memory works on most main stream Linux UI's, Many (most?) Steam games run on Linux, and now we have Windows in the Linux kernel.
Thanks for sharing, never heard about it before. What was kernel programming back then? Briefly checked the wikipedia and looks like CTOS was kinda big in the government space back in the 80s.
But the issue is that it is many multiples of that, especially on the most common PC gaming hardware (Nvidia GPUs), often more than a 25% difference in framerates. Not so important at 144fps, but very important at a 60fps baseline and for genres like fighting games.
A lot of people don't mind, say, an extra 5 frames of input delay. They don't notice it. But a lot of people do notice even an extra 2 or 3.
I do think that frame-pacing issues have a thin critical threshold where the result is either bearable or unacceptable. And the native Windows version can often already be riding right on that line. So while it's not fair to demand the Linux version do better, it is unfortunately the case that the difference might tip it over that line.
I kept running into issues that took me time to solve. I understand that the only reason they took time is that I'm new to this, and that people who have been gaming on Linux for years already know how to solve them all. But what would happen is I would sit down to play a game, spend maybe an hour or two fixing issues, and by then I had run out of time to actually play. I kept this up for a couple of months, but honestly, at some point I just gave up. Now I'm playing games on Windows again.
To be clear, I'm a huge proponent of Linux gaming. I just unfortunately am too busy these days to spend the time to get it to work.
Simply no, thank you.
To give you an idea of how bad it is, they slowed console manufacturing to a trickle last year to try and juice their profit margins, and are now stuck in a situation where they can't spin manufacturing back up to cash in on the inevitable rush of demand for hardware when Grand Theft Auto comes out this fall.
HDMI goes 25'+, no problem.
and displayport 2.0, since 2019, has supported all the same variations (hdr10+, dolby vision) that HDMI does
A good example of this is the Final Fantasy Pixel Remasters, which were so lazily ported that most fans advocate for pirating the originals instead. Why should anyone pay $14.99 for the bad version of FFVI?
Don't all USB-C video outputs use DP alt mode too, with an HDMI adapter at the end? And they can do HDR.
And, assuming you are doing x86, you probably already have an EFI partition, so even motherboard BIOS updates aren't much of a big deal. You just drop the update in the FAT32 EFI partition, reboot, and point the motherboard at that location. Some motherboards even support doing that as part of an online update.
I encounter a few games with frame pacing issues, otherwise not present on Windows. Shader compile time is longer than on Windows. Occasional crash in some games, etc.
Windows has issues too. It's not perfect, although they are different issues to Linux.
Ryzen 5800x3D RTX 4080 64GB DDR4 @ 4000 M/T's
The kernel was in Intel ASM86, but the rest of the OS was written in PL/M-86. When I joined, it was 2MB of code on a 128K 8086 CPU. By the time I left, it was 9MB of code running on an 80386.
Userland anti cheats can work (and do) on Linux if the developers want to. Most of the third party ones the developer buys/licenses already do.
But the reality is that only the kernel-level ones seem to work to some extent. The difference in the amount of cheating between Counter-Strike and Valorant is just massive (both free-to-play games).
what is the source of this non-determinism?
Unfortunately, the install process is always going to be at least a little bit technical. I wish it weren't, but idk how you'd avoid that without making the OS like a chip you can swap out, instead of a thing you write to your drive.
That's not a bug, it's a feature.
I have a dumb-ish Samsung Hotel TV / commercial TV at home. It has DP.
I don't want to discourage you, but what's wrong with helping MO2 and Vortex get ported to Linux?
I've long since decided that buying the latest top end hardware is just spending a lot of money to be upset by buggy drivers or not being able to get 5000 fps in a benchmark but has no real gains in how fun games are.
Although, everyone probably says that about whatever distro they happen to use lol.
I know you framed this as a negative, but this is something I yearn for; it's one of the best games, imo. I often wish I ran into more issues, but for the most part, things _just work_^TM.
Then again - none of the streaming services are streaming at anything remotely close to 100Mbps so I doubt they consider it necessary to upgrade to GbE.
I don't know OG x86 (cuz, ewww), but on 68k this was generally the way. On my Atari ST, a syscall was performed by filling your registers and stack as expected, then executing one of the TRAP opcodes. That would get the CPU to save the PC etc. and jump to the handler in supervisor mode, where the syscall could read state, act accordingly, and then return back to you.
I think x86_64 has just formalized this into a specific SYSCALL instruction?
ARM variants call it SVC (supervisor call).
Same difference.
Some older operating systems just implemented their syscalls as ordinary subroutine jumps, though, and everything ran in supervisor etc. I believe AmigaOS was like this, you just went through a jump table. Which, I think, shaves some cycles but also means compromises in terms of building for memory protection, etc.
Traps typically also result from exceptional conditions (like divide by zero or page fault).
An architecture may or may not provide non-trap paths for less-privileged code to invoke more-privileged subsystems (call gates, "syscall" instructions, etc.).
Traps typically need some way to preserve all userspace-accessible registers (otherwise resuming from a page fault is .. hard). Dedicated syscall instructions may only need to restore a subset of registers.
In some implementations, processors may discover that an instruction must trap after it starts irreversibly changing architecturally visible state; in cases like that, the processor needs to leave enough breadcrumbs for the OS to allow either a clean unwind or a resumption of the interrupted instruction. My understanding is that the original 68000 somewhat famously got this wrong.
A control panel (or cross-distro YaST) would be very welcome in the ecosystem I think.
My 2020 LG CX has a USB 2.0 port and I get ~300mbps with a gigabit adapter, if the TV you ended up with has a USB port it's worth a try.
That said, some Linux distros can do the same now, though I've used so many in the last few months that I don't know which.
https://www.nxp.com/docs/en/reference-manual/M68000PRM.pdf see page 292, also see page 629 for the table of "exception vectors" (addresses for code to handle each specific trap/exception/interrupt)
Most processors support both "interrupts" (an external peripheral is banging on the CPU's interrupt pins... but also invocable from software; software interrupts; SWIs; INT instruction on x86) and "exceptions" (e.g. divide by zero, bus error, illegal instruction). Depending on the processor, accessing the "privileged" mode can be done either by software interrupts, exceptions, or both. An operating system should pick one and stick with it.
Other uses for interrupt/exception/trap vectors include hardware breakpoints: don't try and single-step the CPU, overwrite the code with an illegal instruction and control will flow to the illegal instruction handler where you can see all the registers then execute the real instruction that was meant to be there and return to where you left off. Some CPUs have a formal "BKPT" type instruction for that.
One other use on the 68000 is that any unrecognised instruction that started $Fxxx triggered the F-line handler; all the floating point instructions were in the form $Fxxx, so if you didn't have an FPU, you could put a software emulator for the FPU instructions in the F-line handler and software wouldn't know the difference. Traps/exceptions don't have to be a jump from unprivileged to privileged, they can just be utilitarian.
Later on the 386 Intel added virtual 8086 mode which trapped to the kernel privileged instruction exception also for certain instructions that had to be virtualized, among them INT.
None of them ever seem to have DisplayPort.
In other words, no one is going to refuse to use Linux out of loyalty to Windows, as long as all the games they want to play work.
Heroic is a launcher aggregator/wrapper, I think, for 'Epic, GOG and Amazon Prime Games'. It's either Steam, native/standalone, or arr for me. For non-Steam stuff I use umu.
* I should add that I am launching a steam purchased copy of SoT, not the one from Xbox store/gamepass or what have you, so the process is likely different, but maybe not cus you are likely going to see the same auth popup served via wine/proton.
I don't think you'd need to block multitasking though, but the kernel would need to prevent root access, or make it tamper-proof, so it couldn't be used to modify the game's memory.
It didn't use to be complicated, but an update messed stuff up a few months ago (halo infinite).
So you have very old hardware, can barely play modern AAA games (if at all), and are still happy. Good for you.
But your opinion is relevant to the average gamer who enjoys playing games released in the current year in the same way that someone drinking instant coffee can advise on coffee beans: it's all just caffeine in the end.
The stow approach is something that I considered but ultimately rejected for a couple of reasons, around handling conflicts with game-installed files as well as how to handle the symlink lifecycle (e.g. a wrapper to keep the "non-running" state always clean, or letting the links persist and requiring manual cleanup/update steps). But if you're interested in that approach: when I was applying for Nexus Mods approval I discovered https://github.com/Marc1326/Anvil-Organizer in the overall list of mod tools, which I believe uses that strategy (though I haven't looked too closely).
But basically, my original idea to install the files directly into the game directory stems from the fact that when I switched to Linux for gaming and wasn't having success with MO2, that's literally what I was doing. I would download the mod from Nexus and unzip/untar it into the game directory manually. When I wanted to uninstall or update, I'd find the original archive, list the files in it, and then delete them from my game directory. After doing this too much, I realized I was basically missing the functionality of a standard Linux package manager (e.g. apt, pacman, etc.).
I should add that it's a CLI tool only (I may add a TUI later but it probably won't ever have a GUI if that matters). Anyway if you check it out and have any feedback whether positive or negative that would be cool
I have no idea why people recommend this to people who aren't actually deep into tech and linux already.

Published May 10, 2026, 12:30 PM EDT
In March 2026, Linux crossed five percent of Steam's user base for the first time, an all-time high for an operating system that spent two decades as a novelty when it came to any kind of gaming. Microsoft's end-of-support deadline for Windows 10 last October pushed many users to look at alternatives, and the Steam Deck has quietly turned millions of people into Linux gamers without them really thinking about it, leading to more widespread adoption on desktop machines.
Most of that progress used to happen inside a piece of software called Wine, the translation layer that convinces Windows games they're running on Windows. Valve's tuned version of Wine, called Proton, is what makes Steam Play and the Steam Deck work. For years, every meaningful improvement to Linux gaming came from changes to Wine and Proton themselves. That's still true, but increasingly the most important changes are happening one layer deeper, inside the Linux kernel. The latest example of that is something called NTSYNC, a kernel-level driver that has offered great performance gains over previous versions of Wine, and is loaded by default on every Steam Deck that's up-to-date.
NTSYNC is a small driver added directly to the Linux kernel that gives it a native implementation of the Windows-specific synchronization primitives that games depend on to coordinate themselves.
Modern games juggle dozens of things at once. While you're playing, your CPU manages the rendering pipeline, loading assets, running physics, processing audio, handling AI NPC routines, and tracking inputs, all in parallel across multiple cores. All those jobs constantly have to coordinate so they don't trip over each other.
Quiz
8 Questions · Test Your Knowledge
From a Finnish student's side project to powering the world — how well do you know the story of Linux?
OriginsKernelDistrosPioneersMilestones
Begin
01 / 8
Origins
In what year did Linus Torvalds first announce the Linux kernel to the world?
A1989B1991C1993D1995
Correct! Linus Torvalds posted his now-famous message to the comp.os.minix newsgroup on August 25, 1991, describing Linux as 'just a hobby' project. Few could have predicted it would one day run the majority of the world's servers and smartphones.
Not quite — Torvalds made his announcement in 1991. He was a 21-year-old computer science student at the University of Helsinki at the time, and his modest post described the project as something that 'won't be big and professional' like GNU.
Continue
02 / 8
Pioneers
Which university was Linus Torvalds attending when he created the first version of the Linux kernel?
AStockholm UniversityBAalto UniversityCUniversity of HelsinkiDMIT
Correct! Torvalds was studying at the University of Helsinki in Finland when he began working on Linux as a personal project, initially inspired by MINIX, a small Unix-like system used for educational purposes.
Not quite — Torvalds was a student at the University of Helsinki in Finland. He started Linux partly out of frustration with the limitations of MINIX, which his professor Andrew Tanenbaum had designed deliberately to be simple for teaching.
Continue
03 / 8
Kernel
What operating system primarily inspired Linus Torvalds to create the Linux kernel?
AMS-DOSBMINIXCBSD UnixDSolaris
Correct! MINIX, created by professor Andrew Tanenbaum, was the direct inspiration for Linux. Torvalds used MINIX on his new Intel 386 PC but found it too restricted for his needs, which pushed him to write his own kernel.
Not quite — the answer is MINIX. Torvalds was using MINIX when he started Linux, and even held a famous online debate with its creator Andrew Tanenbaum about kernel design philosophy, specifically monolithic versus microkernel architectures.
Continue
04 / 8
Milestones
What was the version number of the first publicly released Linux kernel in 1991?
A0.01B0.1C1.0D0.99
Correct! Linux version 0.01 was the first kernel Torvalds released publicly in September 1991. It was a rough, early build that could only run on Intel 386 hardware and had very limited functionality, but it marked the true beginning of the Linux project.
Not quite — the first public release was version 0.01 in September 1991. The kernel didn't reach version 1.0 until March 1994, by which point it had grown significantly in capability and had attracted contributions from developers around the world.
Continue
05 / 8
Distros
Which Linux distribution, first released in 1993, is one of the oldest still actively maintained today?
AUbuntuBFedoraCSlackwareDDebian
Correct! Slackware, created by Patrick Volkerding, was first released in July 1993, making it one of the oldest surviving Linux distributions. It is known for its simplicity and Unix-like philosophy, and it continues to be maintained to this day.
Not quite — the answer is Slackware, released in 1993 by Patrick Volkerding. While Debian was also founded in 1993, Slackware narrowly edges it out as the older release. Ubuntu didn't arrive until 2004, and Fedora launched in 2003.
Continue
06 / 8
Origins
The GNU Project, which provided many tools that paired with the Linux kernel, was founded by which developer?
AEric RaymondBRichard StallmanCBruce PerensDIan Murdock
Correct! Richard Stallman founded the GNU Project in 1983 with the goal of creating a completely free Unix-like operating system. When the Linux kernel appeared in 1991, it filled the missing piece GNU needed, and the combination became what many call GNU/Linux.
Not quite — it was Richard Stallman who founded the GNU Project in 1983. Stallman is also known for creating the GPL (GNU General Public License) and founding the Free Software Foundation, two pillars that shaped the legal and philosophical foundation of free software.
Continue
07 / 8
Milestones
Which company released a landmark commercial Linux distribution in 1994, helping bring Linux into the enterprise world?
ACanonicalBSUSECRed HatDMandriva
Correct! Red Hat released its first Linux distribution in 1994 and became one of the most influential commercial Linux companies in history. It pioneered the enterprise Linux market and was eventually acquired by IBM in 2019 for approximately $34 billion.
Not quite — Red Hat is the answer. Founded by Marc Ewing and Bob Young, Red Hat helped prove that companies could build sustainable businesses around open-source software. SUSE Linux also launched in 1994, making it a close rival, but Red Hat became the more globally dominant enterprise force.
Continue
08 / 8
Distros
Ubuntu Linux, one of the most popular desktop distributions, is based on which other Linux distribution?
AArch LinuxBFedoraCDebianDGentoo
Correct! Ubuntu is based on Debian and was first released in October 2004 by Mark Shuttleworth's company Canonical. It was designed to make Linux more accessible to everyday users, and its six-month release cycle and long-term support versions made it a favorite for both desktops and servers.
Not quite — Ubuntu is built on top of Debian. Debian itself was founded in 1993 by Ian Murdock and is known for its strict commitment to free software and stability. Ubuntu inherits Debian's package management system (APT and .deb packages) but adds its own user-friendly layer on top.
Windows handles this coordination with a specific set of synchronization primitives. Before NTSYNC, Wine had to mimic those primitives using mechanisms like esync and fsync, which both worked but didn't always match Windows exactly. NTSYNC builds these primitives straight into the Linux kernel for the first time, so Wine no longer has to emulate anything. The developer-facing API calls don't change; Linux simply knows how to answer them natively.

NTSYNC isn't the first time Linux has gained a new feature specifically because Windows games needed it. A few years back, Linux added futex_waitv(), a way for software to wait on several events at once, something Windows had offered for decades but Linux didn't. Wine had been working around the gap with awkward tricks until the kernel finally got native support.
This work is driven by Valve, by CodeWeavers (the company that employs many of the core Wine developers, including NTSYNC's author Elizabeth Figura), and by a steady stream of contributors who want Linux to be a real gaming platform without depending on out-of-ecosystem patches forever.

The headline performance gains look great, but they need some context. The eye-catching 40 to 200 percent FPS gains cited in NTSYNC's original benchmarks were measured against unmodified upstream Wine, which almost nobody uses to play games on Linux anymore. Most Linux gamers, including every Steam Deck owner, use Proton, which already has fsync. Compared to fsync, NTSYNC's performance gains are far more modest. The games that benefit most from the change to NTSYNC are games that were really struggling before. Anything that was running at decent framerates beforehand is still going to run fine.
Linux is a completely different beast than it was a decade ago.

Pierre-Loup Griffais, an engineer at Valve, has said on the record that fsync was already fast enough. Despite that, Valve still shipped NTSYNC in stable SteamOS in March, which speaks to the fact that fsync remains a workaround at its core and can cause problems beyond poor raw FPS.
These old workarounds got subtle edge cases wrong in ways that produced occasional hitches, deadlocks, or odd behavior in specific games: bugs that don't show up on benchmark charts but can absolutely ruin individual experiences. NTSYNC fixes those at the source by matching Windows behavior exactly, which means that as soon as your favorite distro moves to the new kernel version, whether that's Bazzite, CachyOS, Fedora, or a flavor of Ubuntu, it gets this much-needed fix.
Linux has grown so much in the gaming department. Where there once was nothing but clever Wine patches and community workarounds now lies support from gaming behemoths like Valve, driving changes to the Linux kernel itself. NTSYNC won't be the last time a piece of Windows gets rebuilt inside Linux because gamers needed it, and with more than five percent of Steam's user base now running Linux, the incentive to keep doing it has never been stronger.
The most egregious problem is that Steam games start in a strange window rather than full screen, and you have to press a weird key combo to fix it.
Nvidia-based Acer Nitro, FWIW; your mileage may vary.
That's not "more accurately", that's just a completely different thing. When I'm on Mac, my muscle memory is thrown off. I'll be typing and my ctrl+s, alt+tab, win+4, ctrl+left* all cause wildly unpredictable (to me) things. I'm currently using Linux, and all of those things work how I expect (with a tiny asterisk on win+#). When I want a control panel, I press the windows button on my keyboard to open something functionally equivalent to the start menu, and open System Settings to get something functionally equivalent to the control panel.
I have no doubt that I could learn the deep differences between Windows and Mac over time, but the initial muscle memory causes me stress before I get to that point. When I switch to Linux I don't have that stress, and so I've been comfortably learning those differences.
* - save, switch to the previously in-focus window, switch to the 4th program on the taskbar, move the cursor one word to the left
Your definition of great performance is not mine, but it’s fantastic to watch Linux users continue to hand wave away real issues whilst continually claiming the same or better performance across the board, which is provably false.
> but has no real gains in how fun games are.
It absolutely does for me. Modern displays are absolutely dogshit. I won’t play at anything less than 144hz, as much as I can I aim for 200hz and I want that with consistent frame times.
Gaming moved for a lot of us from "now I have 5 hours or a whole weekend for gaming if I want" to mere blips here and there, which need to be as frictionless as possible.
Which is great - it means we are doing something actually meaningful and more worthy in our lives. But it also means I will never have enough time for such fiddling. I am fine with it, as much as I can be, but let's be honest with ourselves here.
Practically, most will support access via both, but for different reasons. For example, page faults (which the software cannot possibly predict) are going to be exception-mediated, but syscalls (which the software explicitly asks for) are triggered via a software interrupt.
It's the same tool the person you were replying to was pointing at via the Arch wiki. It's pretty standard. I'd expect most distros to support it by now.
We used a set of INT instructions in well-known low memory addresses that all jumped to the same place. We had an ASM file that you linked with, that had sixteen different address combinations for each.
The common entry point would look back on the stack and calculate from the return address which entry point had been called, and run the appropriate kernel call. We called it the CS:IP hack.
In the context of this post, the DOS INT 10h and INTx (I forget which) required the caller to load registers with the desired system call number, then perform the trap instruction in their code. Fortunately CTOS didn't need those particular software interrupts, so I could implement them for my purposes.
So if you need to persist changes into the lower layers, I think you may need to do tricks like taking snapshots and then swapping the bind mount (maybe with some diffing logic) or some other offline methods.
Every time I get something mid range or second hand I feel good about what a good deal I got, and how I'm getting 98% of the features for 40% of the price, and how realistically as soon as you stop pixel peeping screenshots, you won't even notice your settings are on High instead of Ultra. You just take in the story, the sound design, and the actual game.
The game story, gameplay elements, and such have become secondary to the real hobby of consumerism. If people could have fun gaming 20 years ago, there is no reason it isn't possible to have just as much fun gaming on low to mid range hardware today.
Where are the hordes of kids like us back then who were content with the afternoons, evenings and wee early morning hours of endless fiddling? What I realize now is those years spent fiddling sharpened our debugging senses in both ineffable and tractable ways.
A larger proportion of the juniors I see coming through the corporate halls these days than I remember from even 10 years ago do not have that knack for fiddling, nor the history when it comes up. And it shows in their debugging temperament. LLMs are making this worse.
1. https://www.reddit.com/r/PleX/comments/eoa03e/psa_100_mbps_i...
The routine at 0xFFD00 could then enter protected mode and use the code segment to build the index into a table of entry points: FFD0 goes to index 0, FFCF goes to index 1, and so on. But for extra kicks, the address isn't actually pointing to valid code. It points to a random "c" character in the BIOS, which decodes as an ARPL instruction (opcode 0x63, the ASCII code for "c"); ARPL is invalid in v8086 mode and therefore invokes the undefined-opcode exception handler. That handler, which handily enough is already running in protected mode, then takes care of doing the 32-bit call.
Related: https://devblogs.microsoft.com/oldnewthing/20041215-00/?p=37...
Also described here: https://news.ycombinator.com/item?id=45283085
You’re projecting. I think I’ve got what I enjoy from my hobby figured out after 35+ years, but thanks anyway.
> The game story, gameplay elements, and such have become secondary to the real hobby of consumerism.
You’re projecting.
> If people could have fun gaming 20 years ago
I didn’t have to endure sample and hold slop 20 years ago, now I do. You may accept or tolerate it, I am under no requirement to do so, nor live in a world where I must accept a significant performance loss is “ok” in any circumstance.
If I wanted less performance, I’d buy something with less performance to begin with.
The hobby of optimising your gaming desktop is a related but different hobby to actually playing games.
For all the issues people claim to have with iOS or Android, they really "just work" compared to the shit we had to deal with back in the day. And I don't even mean bugs, but UX just wasn't as sleek.
I can find a pdf of the TTRPG I'm playing that's hidden deep in an iCloud drive by simply opening Spotlight and typing the approximate name. And the same works on my iPhone. Apps that create documents for me hide their file structure, because it's all abstracted away from me. It works, and I don't have to think about it as much.
You still have kids that start fiddling with tech, but only out of clear interest. Not as a necessity.
mov ah, 2    ; DOS function 02h: write the character in DL
mov dl, 7    ; ASCII 7 is BEL, so DOS beeps the speaker
int 21h      ; invoke DOS
Ahhh.. probably my first program. Don't forget the int 20 at the end! It was beeping great. Still never unlocked the mysteries of those TSR programs though.
It's much harder to step back and realise you don't need the new thing most of the time. Sure if you have a 15+ year old desktop and you can't run the new games at all then an upgrade could be good, but I'd guess most hardware purchases come from people who already have great hardware.
So, what TSRs would do is overwrite one or more interrupts to point to a routine that would check if the system call in question was one it wanted to handle (eg, to add a hotkey it would grab the keyboard handler and check for a special set of keys before passing control back to the normal handler). Once that was fine, it would call the TSR system call and control would be passed back to the OS with the hook still in place
I have very specific requirements for motion clarity in games on modern displays. Older display technologies like CRTs and plasmas achieved this naturally through the way they operated. Most modern sample-and-hold displays do not.
You may not notice or be affected by that difference, which is fine. Couldn’t be more thrilled for you; however, I am affected. Anything below 120Hz on a sample-and-hold display causes noticeable discomfort for me, and for a long time I stopped gaming entirely because I couldn’t work out why, seemingly overnight, playing anything had become so uncomfortable. Eventually I realised the issue started when I moved away from CRTs and plasma TVs to modern sample-and-hold displays.
I was only able to comfortably return to gaming by using very fast displays at 120Hz minimum, preferably 240Hz, because that gets closer to the motion quality I was used to from years of using PC CRTs. For games locked to 60Hz or below, I still prefer playing them on a CRT for exactly that reason and I own a number of CRTs for this reason.