Play games which are beyond that: dota2, cs2 for instance.
On Linux, there is a new syscall that allows a process to mmap another process's pages into its own address space (presumably restricted to the same effective UID and GID). That is more than enough to give hell to cheats...
But any of that can work only with a permanent, hard-working "security" team. Game devs who don't want to do that should keep their games offline.
Now industry propaganda has gamers installing them voluntarily.
The article doesn't go into much depth on the actually interesting things modern anticheats do.
In addition:
- you can't really expect the .text section of the game, or of any module except maybe your own, to be a 100% match for the copy on disk, because overlays will hook things like the render path (fun fact: Steam will also aggressively hook various WinAPI functions, presumably for VAC - at least on CS2)
Mucking about in the kernel basically bypasses the entire security and stability model of the OS. And this is not theoretical: people have been rooted through buggy anti-cheat software, where malicious calls sent from the game to the kernel hijacked the anti-cheat to gain root access.
Even in more benign cases, people often get 'gremlins': weird failures and BSODs caused by kernel APIs being intercepted and overridden incorrectly.
The solution here is to establish a root of trust from boot and use the OS's sandboxing features (like Job Objects on NT). Providing a secure execution environment is the OS developers' job.
Every sane approach to security relies on keeping the bad guys out, not mitigating the damage they can do once they're in.
It'd be really interesting to see what would happen - for instance, what fraction of players would pick each pool during the first few weeks after launch, and then how many of them would switch after? What about players who joined a few months or a year after launch?
Unfortunately, pretty much the only company that could make this work is Valve, because they're the only one that actually cares about players and is big enough to gather meaningful data. And I don't think even Valve would see enough value in this to dedicate the substantial resources it'd take to implement.
Kernel anticheat does work. It takes five seconds of looking at Valve's record with VAC (client-based signature analysis) and VACnet (machine learning) to see that cheating is far more prevalent there than on platforms using kernel-level anticheat (e.g. FACEIT, Vanguard). Of course, KLAC is not infallible - this is known. Yes, cheats do (and will continue to) exist. However, it greatly raises the barrier to entry. Kernel cheats that go undetected by FACEIT or Vanguard are expensive, often recurring subscriptions (some billed at intervals as short as a day or a week). Cheat developers will 99% of the time not release these publicly, because they would be picked up and detected instantly when they could instead be making serious money selling privately. As mentioned in the article, with DMA devices you're looking at a minimum of a couple hundred dollars just for hardware, not including the cheat itself.
These are video games. No one is forcing you to play them. If you are morally opposed to KLAC, simply don't play the game. If you don't want KLAC, prepare to have your experience consistently and repeatedly ruined.
Modern cheats use hypervisors, or just compromise Hyper-V - and because Hyper-V protects itself, it automatically protects your cheat too.
Another option that is becoming super popular is BIOS patching: most motherboards will never support Boot Guard, and direct BIOS flashing will always be an option since the chipset fuse only protects against flashing through the chipset.
DMA is probably the most popular by far with fusers. However, the cost of good ones has been increasing due to Vanguard fighting the common methods, which is bleeding into other anticheats (some EAC versions and Ricochet).
These are not assumptions: every time anticheats go up a level, so do the cheats. In the end the weakest link will be exploited, and it doesn't matter how sophisticated your anticheat is.
What does make cheat developers afraid is AI, primarily in Overwatch. It's quite literally impossible to cheat anymore (in a way that disturbs normal players for more than a few games), and they only have a usermode anticheat! They rely heavily on spoofing detection and gameplay analysis, including community reports. Instead of detecting cheats, they detect the cheaters themselves and then clamp down on them by capturing as much information about their system as possible (all from usermode!!!).
Of course, you could argue that you could just take advantage of the fact that they have to go through usermode to capture all this information and just sit in the kernel, but hardware attestation is making this increasingly difficult.
The future is usermode anticheats and gameplay analysis, drop kernel mode anticheats.
No, Secure Boot doesn't work if you patch SMM in the BIOS; you run before TPM attestation happens.
Cheaters are by definition anomalies, they operate with information regular players do not have. And when they use aimbots they have skills other players don't have.
If you log every single action a player takes server-side and apply machine learning methods it should be possible to identify these anomalies. Anomaly detection is a subfield of machine learning.
It will ultimately prove to be the solution, because only the most clever of cheaters will be able to blend in while still looking like great players. And only the most competently made aimbots will be able to appear like great player skills. In either of those cases the cheating isn't a problem because the victims themselves will never be sure.
There is also another method the server can employ: players can be actively probed with game-world entities designed so that only someone with cheats will react to them. Every such event adds probability weight onto the suspected cheater. Since the game world isn't delivered to the client in full, well-made probes will be hard for cheats to filter out. For example: as a potential cheater enters entity broadcast range of a fake entity camping in an invisible corner that only appears to them, their reaction to it is evaluated (mouse movements, strategy shift, etc.). Then, when it disappears, another evaluation can take place (cheats would likely offer mitigations for this part). Over time cheaters will stand out from the noise; most will likely out themselves very quickly.
Okay, chill. I'm willing to believe that anti-cheat software is "sophisticated", but intercepting system calls doesn't make it so. There is plenty of software that operates at elevated privilege and runs transparently while other software is running, while intentionally being unsophisticated. It's called a kernel subsystem.
How about this: instead of third-party companies installing their custom code to fuck with my operating system, have the OS offer an API that a game can request to reboot into a "console mode" - a single-user, single-application mode that runs only that game.
Similar to how consoles work.
That mode could be reserved for competitive ranked multiplayer only.
If you got RCE in the game itself, it's effectively game over for any data you have on the computer.
That’s not true at all in the field of cybersecurity in general, and I have doubts that it’s true in the subset of the field that has to do with anticheat.
Himata is correct, too. After DMA-based stuff, it'll be CPU debug-mode exploits like DCI-OOB (some of which can be made detectable in kernel mode) or stealthier hypervisors.
They solve a real problem (cheats running at higher privilege levels), but at the same time they introduce a massive trusted component into the OS. You're basically asking users to install something that behaves very much like a rootkit, just with a defensive purpose.
This seems much more doable today than in the past as machines boot in moments. Switching from secure "xbox mode" to free form PC mode, would be barely a bump.
Now, I see one major difference: heterogeneous vs homogeneous hardware (and the drivers that come with it). In the Xbox world, one is dealing with a very specific hardware platform and a single set of drivers. In the PC world (even with a trusted secure boot path), one is dealing with lots of different hardware and drivers that can all have their own exploits. If users can easily modify their PCs and driver sets, I'd imagine serious cheaters would gravitate to combinations they know they can exploit to break the secure/trusted-boot boundary.
I wonder if there are other problems.
https://www.vice.com/en/article/fs-labs-flight-simulator-pas...
Company decides to "catch pirates" as though it was police. Ships a browser stealer to consumers and exfiltrates data via unencrypted channels.
https://old.reddit.com/r/Asmongold/comments/1cibw9r/valorant...
https://www.unknowncheats.me/forum/anti-cheat-bypass/634974-...
Covertly screenshots your screen and sends the image to their servers.
https://www.theregister.com/2016/09/23/capcom_street_fighter...
https://twitter.com/TheWack0lian/status/779397840762245124
https://fuzzysecurity.com/tutorials/28.html
https://github.com/FuzzySecurity/Capcom-Rootkit
Yes, a literal privilege escalation as a service "anticheat" driver.
Trusting these companies is insane.
Every video game you install is untrusted proprietary software that assumes you are a potential cheater and criminal. They are pretty much guaranteed to act adversarially to you. Video games should be sandboxed and virtualized to the fullest possible extent so that they can access nothing on the real system and ideally not even be able to touch each other. We really don't need kernel level anticheat complaining about virtualization.
I was not aware that attackers could potentially manipulate attestation! How could that be done? That would seemingly defeat the point of remote attestation.
I really thought this might change over time given strong desire for useful attestation by major actors like banks and media companies, but apparently they cannot exert the same level of influence on the PC industry as they have on the mobile industry.
Community moderation simply doesn't work at scale for anticheat - in level of effort required, root cause detection, and accuracy/reliability.
I'd rather play with cheaters here and there than install kernel-level malware on my machine just so EA, Activision, et al. can keep raking in money hand over fist.
Or better yet, I can just play on console where there is no cheating that I have ever seen.
Hot take: It's also totally unnecessary. The entire arms race is stupid.
Proper anti-cheat needs to be 0% invasive to be effective: server-side analysis plus client-side checks with no special privileges.
The problem is laziness, lack of creativity and greed. Most publishers want to push games out the door as fast as possible, so they treat anti-cheat as a low-budget afterthought. That usually means reaching for generic solutions that are relatively easy to implement because they try to be as turn-key as possible.
This reductionist "Oh no! We have to lock down their access to video output and raw input! Therefore, no VMs or Linux for anyone!" is idiotic. Especially when it flies in the face of Valve's prevailing trend towards Linux as a proper gaming platform.
There are so many local-only, privacy-preserving anti-cheat approaches that can be done with both software and dirt-cheap hardware peripherals. Of course, if anyone ever figures that out, publishers will probably twist it towards invasive harvesting of data.
I'd love to be playing Marathon right now, but Bungie just wholesale doesn't support Linux nor VMs. Cool. That's $40 they won't get from me, multiply by about 5-10x for my friends. Add in the negative reviews that are preventing the game's Steam rating from reaching Overwhelmingly Positive and the damage to sales is significant.
Not everyone enjoys that, and that’s fine, but acting like it’s somehow unnatural or pointless feels way off.
This is roughly what Valve does for CS2. But, as far as I understand, it's not very effective and unfortunately still results in higher cheating rates than e.g. Valorant.
Well it's definitely not game developer written kernel anti-cheat on consoles.
They also have VM checks. I "accidentally" logged into MGM from a virtual machine. They put my account on hold and requested I write a "liability statement" stating I would delete all "location altering software" and not use it again. (Really!)
The harder thing probably is getting a dataset for “all x64/ARM64 Windows drivers that aren’t already considered vulnerable”.
Also it depends what’s considered a vulnerability here.
Looking at cards is a way easier problem than rendering a 3D world with other players bouncing around. I imagine you could just send the card player basically a screenshot of what you want them to see and give them no other data to work with, and that would mostly solve cheating.
But gambling can be way more complicated than just looking at cards so maybe there's a lot more to it.
You do not need kernel access to make spyware that takes screenshots. You do not need a privileged service to read the user’s browser history.
You can do all of this, completely unprivileged on Windows. People always seem to conflate kernel access with privacy which is completely false. It would in fact be much harder to do any of these things from kernel mode.
Anti-cheat doesn't run on modern consoles; game devs know that the latest firmware on a console is secure enough that the console can't be tampered with.
Defeating remote attestation will be a key capability in the future. We should be able to fully own our computers without others being able to discriminate against us for it.
I wouldn’t call BIOS patching “super popular”. That sounds like an admission that anti-cheat is working because running cheats now requires a lot of effort. Now that cheats are becoming more involved to run, it’s becoming less common to cheat.
When cheats were as simple as downloading a program and you were off to cheating, the barrier to entry was a lot lower. It didn’t require reboots or jumping through hoops. Anyone could do it and didn’t even have to invest much time into it.
Now that cheats are no longer an easy thing to do, a lot of would-be cheaters are getting turned off of the idea before they get far enough to cheat in a real game.
> Of course you could argue that you could just take advantage that they have to go through usermode to capture all this information and just sit in the kernel, but hardware attestation is making this increasily more difficult.
Didn’t the first half of your post just argue that these measures can be defeated and therefore you can’t rely on them?
I, myself, got two accounts banned and I was innocent. I managed to make it through support and got them unbanned but I'm fairly certain that many players didn't, because they seem to employ AI in their support.
So I'm a bit skeptical about that kind of behavioural ban. You risk banning a lot of dedicated players who happen to play differently from the majority, and that tends to bring a bad reputation. For example, I no longer purchase a yearly subscription because I'm afraid of a sudden ban and losing lots of unspent subscription time.
AKA the way that is easiest to detect, and the easiest way to claim that the game doesn't have cheaters. Behavioral analysis doesn't work with closet cheaters, and they corrupt the community and damage the game in much subtler ways. There's nothing worse than to know that the player you've competed with all this time had a slight advantage from the start.
It's almost the same as saying "you don't need a password on your phone" or something like that.
Not sure what your point is. Most of your post is inaccurate; DMA cheats represent a minority of cheats because they're very expensive and you need a second computer.
The thing about gaming is that it’s not acceptable to leave 5% performance on the table whereas for other uses it usually is.
Modern kernel anti-cheat systems are, without exaggeration, among the most sophisticated pieces of software running on consumer Windows machines. They operate at the highest privilege level available to software, they intercept kernel callbacks that were designed for legitimate security products, they scan memory structures that most programmers never touch in their entire careers, and they do all of this transparently while a game is running. If you have ever wondered how BattlEye actually catches a cheat, or why Vanguard insists on loading before Windows boots, or what it means for a PCIe DMA device to bypass every single one of these protections, this post is for you.
This is not a comprehensive or authoritative reference. It is just me documenting what I found and trying to explain it clearly. Some of it comes from public research and papers I have linked at the bottom, some from reading kernel source and reversing drivers myself. If something is wrong, feel free to reach out. The post assumes some familiarity with Windows internals and low-level programming, but I have tried to explain each concept before using it.
The fundamental problem with usermode-only anti-cheat is the trust model. A usermode process runs at ring 3, subject to the full authority of the kernel. Any protection implemented entirely in usermode can be bypassed by anything running at a higher privilege level, and in Windows that means ring 0 (kernel drivers) or below (hypervisors, firmware). A usermode anti-cheat that calls ReadProcessMemory to check game memory integrity can be defeated by a kernel driver that hooks NtReadVirtualMemory and returns falsified data. A usermode anti-cheat that enumerates loaded modules via EnumProcessModules can be defeated by a driver that patches the PEB module list. The usermode process is completely blind to what happens above it.
Cheat developers understood this years before most anti-cheat engineers were willing to act on it. The kernel was, for a long time, the exclusive domain of cheats. Kernel-mode cheats could directly manipulate game memory without going through any API that a usermode anti-cheat could intercept. They could hide their presence from usermode enumeration APIs trivially. They could intercept and forge the results of any check a usermode anti-cheat might perform.
The response was inevitable: move the anti-cheat into the kernel.
The escalation has been relentless. Usermode cheats gave way to kernel cheats. Kernel anti-cheats appeared in response. Cheat developers began exploiting legitimate, signed drivers with vulnerabilities to achieve kernel execution without loading an unsigned driver (the BYOVD attack). Anti-cheats responded with blocklists and stricter driver enumeration. Cheat developers moved to hypervisors, running below the kernel and virtualizing the entire OS. Anti-cheats added hypervisor detection. Cheat developers began using PCIe DMA devices to read game memory directly through hardware without ever touching the OS at all. The response to that is still being developed.
Each escalation requires the attacking side to invest more capital and expertise, which has an important effect: it filters out casual cheaters. A $30 kernel cheat subscription is accessible to many people. A custom FPGA DMA setup costs hundreds of dollars and requires significant technical knowledge to configure. The arms race, while frustrating for anti-cheat engineers, does serve the practical goal of making cheating expensive and difficult enough that most cheaters do not bother.
Four systems dominate the competitive gaming landscape:
BattlEye is used by PUBG, Rainbow Six Siege, DayZ, Arma, and dozens of other titles. Its kernel component is BEDaisy.sys, and it has been the subject of detailed public reverse engineering work, most notably by the secret.club researchers and the back.engineering blog.
EasyAntiCheat (EAC) is now owned by Epic Games and used in Fortnite, Apex Legends, Rust, and many others. Its architecture is broadly similar to BattlEye in its three-component design but differs significantly in implementation details.
Vanguard is Riot Games’ proprietary anti-cheat used in Valorant and League of Legends. It is notable for loading its kernel component (vgk.sys) at system boot rather than at game launch, and for its aggressive stance on driver allowlisting.
FACEIT AC is used for the FACEIT competitive platform for Counter-Strike. It is a kernel-level system with a well-regarded reputation in the competitive community for effective cheat detection, and has been the subject of academic analysis examining the architectural properties of kernel anti-cheat software more broadly.
The 2024 paper “If It Looks Like a Rootkit and Deceives Like a Rootkit” (presented at ARES 2024) analyzed FACEIT AC and Vanguard through the lens of rootkit taxonomy, noting that both systems share technical characteristics with that class of software: kernel-level operation, system-wide callback registration, and broad visibility into OS activity. The authors are careful to distinguish between technical classification and intent, explicitly acknowledging that these systems are legitimate software serving a defensive purpose. The paper’s contribution is primarily taxonomic rather than accusatory.
The underlying observation is simply that effective kernel anti-cheat requires the same OS primitives that malicious kernel software uses, because those primitives are what provide the visibility needed to detect cheats. Any sufficiently capable kernel anti-cheat will look like a rootkit under static behavioral analysis, because capability and intent are orthogonal at the kernel API level. This is a constraint imposed by Windows architecture, not a design choice unique to any particular anti-cheat vendor.
Modern kernel anti-cheats universally follow a three-layer architecture:
Kernel driver: Runs at ring 0. Registers callbacks, intercepts system calls, scans memory, enforces protections. This is the component that actually has the power to do anything meaningful.
Usermode service: Runs as a Windows service, typically with SYSTEM privileges. Communicates with the kernel driver via IOCTLs. Handles network communication with backend servers, manages ban enforcement, collects and transmits telemetry.
Game-injected DLL: Injected into (or loaded by) the game process. Performs usermode-side checks, communicates with the service, and serves as the endpoint for protections applied to the game process specifically.
The separation of concerns here is both architectural and security-motivated. The kernel driver can do things no usermode component can, but it cannot easily make network connections or implement complex application logic. The service can do those things but cannot directly intercept system calls. The in-game DLL has direct access to game state but runs in an untrustworthy ring-3 context.
IOCTLs (I/O Control Codes) are the primary communication mechanism between usermode and a kernel driver. A usermode process opens a handle to the driver’s device object and calls DeviceIoControl with a control code. The driver handles this in its IRP_MJ_DEVICE_CONTROL dispatch routine. The entire communication is mediated by the kernel, which means a compromised usermode component cannot forge arbitrary kernel operations - it can only make requests that the driver is programmed to service.
Named pipes are used for IPC between the service and the game-injected DLL. A named pipe is faster and simpler than routing everything through the kernel, and it allows the service to push notifications to the game component without polling.
Shared memory sections created with NtCreateSection and mapped into both the service process and the game process via NtMapViewOfSection allow high-bandwidth, low-latency data sharing. Telemetry data (input events, timing data) can be written to a shared ring buffer by the game DLL and read by the service without the overhead of IPC per event.
The distinction between boot-time and runtime driver loading is more significant than it might appear.
BattlEye and EAC load their kernel drivers when the game is launched. BEDaisy.sys and its EAC equivalent are registered as demand-start drivers and loaded via ZwLoadDriver from the service when the game starts. They are unloaded when the game exits.
Vanguard loads vgk.sys at system boot. The driver is configured as a boot-start driver (SERVICE_BOOT_START in the registry), meaning the Windows kernel loads it before most of the system has initialized. This gives Vanguard a critical advantage: it can observe every driver that loads after it. Any driver that loads after vgk.sys can be inspected before its code runs in a meaningful way. A cheat driver that loads at the normal driver initialization phase is loading into a system that Vanguard already has eyes on.
The practical implication of boot-time loading is also why Vanguard requires a system reboot to enable: the driver must be in place before the rest of the system initializes, which means it cannot be loaded after the fact without a restart.
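In registry terms, a boot-start driver is just a service entry whose Start value is 0. A hypothetical sketch (the Start and Type values follow the documented service constants; the path is illustrative):

```
; Hypothetical .reg sketch of a boot-start kernel driver registration.
; Start=0 is SERVICE_BOOT_START; Type=1 is SERVICE_KERNEL_DRIVER.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vgk]
"Type"=dword:00000001
"Start"=dword:00000000
"ErrorControl"=dword:00000001
"ImagePath"="\\SystemRoot\\system32\\drivers\\vgk.sys"
```

A demand-start driver like BEDaisy.sys would instead use Start=3 (SERVICE_DEMAND_START) and be loaded explicitly via ZwLoadDriver when the game launches.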
Windows enforces Driver Signature Enforcement (DSE) on 64-bit systems, which requires that kernel drivers be signed with a certificate that chains to a trusted root and that the driver’s code integrity be verified at load time. This is implemented through CiValidateImageHeader and related functions in ci.dll. The kernel also enforces that driver certificates meet certain Extended Key Usage (EKU) requirements.
Anti-cheats handle signing in the obvious way: they pay for extended validation (EV) code signing certificates, go through Microsoft’s WHQL process for some components, or use cross-signing. The certificate requirements have tightened significantly over the years; Microsoft now requires EV certificates for new kernel drivers, and the kernel driver signing portal requires WHQL submission for drivers targeting Windows 10 and later in many cases.
The reason this matters for cheats is that DSE is a significant barrier. Without a signed driver or a way to bypass DSE, a cheat author cannot load arbitrary kernel code. BYOVD attacks (covered in section 7) are the primary mechanism for bypassing this restriction.
BattlEye’s architecture is well-documented through reverse engineering:
BEDaisy.sys is the kernel driver. It registers callbacks for process creation, thread creation, image loading, and object handle operations. It implements the actual scanning and protection logic.
BEService.exe (or BEService_x64.exe) is the usermode service. It communicates with BEDaisy.sys via a device object that the driver exposes. It handles network communication with BattlEye’s backend servers, receives detection results from the driver, and is responsible for ban enforcement (kicking the player from the game server).
BEClient_x64.dll is injected into the game process. BattlEye does not inject this via CreateRemoteThread in the traditional sense - it is loaded as part of game initialization, with the game’s cooperation. This DLL is responsible for performing usermode-side checks within the game process context: it verifies its own integrity, performs various environment checks, and serves as the target for protections that the kernel driver applies specifically to the game process.
The communication flow goes: BEDaisy.sys detects something suspicious, signals BEService.exe via an IOCTL completion or a shared memory notification, BEService.exe reports to BattlEye’s servers, the server decides on an action (kick/ban), and BEService.exe instructs the game to terminate the connection.
BattlEye’s three-component architecture: BEDaisy.sys at ring 0 communicates upward via IOCTLs to BEService.exe running as a SYSTEM service, which manages BEClient_x64.dll injected in the game process.
vgk.sys is notably more aggressive than the BattlEye driver in its scope. Because it loads at boot, it can intercept the driver load process itself. Vanguard maintains an internal allowlist of drivers that are permitted to co-exist with a protected game. Any driver not on this list, or any driver that fails integrity checks, can result in Vanguard refusing to allow the game to launch. This is an allowlist model rather than a blocklist model, which is architecturally much stronger.
vgauth.exe is the Vanguard service, which handles the communication between vgk.sys and Riot’s backend infrastructure.
This is the foundation of everything a kernel anti-cheat does. The Windows kernel exposes a rich set of callback registration APIs intended for security products, and anti-cheats use every one of them.
ObRegisterCallbacks is perhaps the single most important API for process protection. It allows a driver to register a callback that is invoked whenever a handle to a specified object type is opened or duplicated. For anti-cheat purposes, the object types of interest are PsProcessType and PsThreadType.
```c
OB_CALLBACK_REGISTRATION callbackReg = {0};
OB_OPERATION_REGISTRATION opReg[2] = {0};

// Altitude string is required - must be unique per driver
UNICODE_STRING altitude = RTL_CONSTANT_STRING(L"31001");

// Monitor handle opens to process objects
opReg[0].ObjectType = PsProcessType;
opReg[0].Operations = OB_OPERATION_HANDLE_CREATE | OB_OPERATION_HANDLE_DUPLICATE;
opReg[0].PreOperation = ObPreOperationCallback;
opReg[0].PostOperation = ObPostOperationCallback;

// Monitor handle opens to thread objects
opReg[1].ObjectType = PsThreadType;
opReg[1].Operations = OB_OPERATION_HANDLE_CREATE | OB_OPERATION_HANDLE_DUPLICATE;
opReg[1].PreOperation = ObPreOperationCallback;
opReg[1].PostOperation = NULL;

callbackReg.Version = OB_FLT_REGISTRATION_VERSION;
callbackReg.OperationRegistrationCount = 2;
callbackReg.Altitude = altitude;
callbackReg.RegistrationContext = NULL;
callbackReg.OperationRegistration = opReg;

NTSTATUS status = ObRegisterCallbacks(&callbackReg, &gCallbackHandle);
```
The pre-operation callback receives a POB_PRE_OPERATION_INFORMATION structure. The critical field is Parameters->CreateHandleInformation.DesiredAccess. The callback can strip access rights from the desired access by modifying Parameters->CreateHandleInformation.DesiredAccess before the handle is created. This is how anti-cheats prevent external processes from opening handles to the game process with PROCESS_VM_READ or PROCESS_VM_WRITE access.
When a cheat calls OpenProcess(PROCESS_VM_READ | PROCESS_VM_WRITE, FALSE, gameProcessId), the anti-cheat’s ObRegisterCallbacks pre-operation callback fires. The callback checks whether the target process is the protected game process. If it is, it strips PROCESS_VM_READ, PROCESS_VM_WRITE, PROCESS_VM_OPERATION, and PROCESS_DUP_HANDLE from the desired access. The cheat receives a handle, but the handle is useless for reading or writing game memory. The cheat’s ReadProcessMemory call will fail with ERROR_ACCESS_DENIED.
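A minimal sketch of what such a pre-operation callback might look like. This assumes a hypothetical IsProtectedGameProcess helper; a production implementation would also exempt trusted system processes (csrss.exe, lsass.exe, debuggers the user has authorized) to avoid breaking legitimate tooling:

```c
// Sketch of a handle-stripping pre-operation callback.
// IsProtectedGameProcess is a hypothetical helper.
OB_PREOP_CALLBACK_STATUS ObPreOperationCallback(
    PVOID RegistrationContext,
    POB_PRE_OPERATION_INFORMATION OperationInformation
)
{
    // Ignore kernel-mode handle opens and opens originating from
    // the protected process itself
    if (OperationInformation->KernelHandle ||
        PsGetCurrentProcess() == (PEPROCESS)OperationInformation->Object)
        return OB_PREOP_SUCCESS;

    if (OperationInformation->ObjectType == *PsProcessType &&
        IsProtectedGameProcess((PEPROCESS)OperationInformation->Object))
    {
        ACCESS_MASK deny = PROCESS_VM_READ | PROCESS_VM_WRITE |
                           PROCESS_VM_OPERATION | PROCESS_DUP_HANDLE;

        if (OperationInformation->Operation == OB_OPERATION_HANDLE_CREATE) {
            OperationInformation->Parameters->CreateHandleInformation.DesiredAccess &= ~deny;
        } else {
            OperationInformation->Parameters->DuplicateHandleInformation.DesiredAccess &= ~deny;
        }
    }
    return OB_PREOP_SUCCESS;
}
```

Note that the callback cannot fail the handle open outright; it can only filter access bits, which is why the cheat still receives a (neutered) handle.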
The IRQL for ObRegisterCallbacks pre-operation callbacks is PASSIVE_LEVEL, which means the callback can call pageable code and perform blocking operations (within reason).
ObCallbackDemo.sys in action. DebugView shows the driver stripping handle access rights, Target.exe is running with secret data in memory, and Verifier.exe fails to read it with Access Denied.
PsSetCreateProcessNotifyRoutineEx allows a driver to register a callback that fires on every process creation and termination event system-wide. The callback receives a PEPROCESS for the process, the PID, and a PPS_CREATE_NOTIFY_INFO structure containing details about the process being created (image name, command line, parent PID).
```c
VOID ProcessNotifyCallback(
    PEPROCESS Process,
    HANDLE ProcessId,
    PPS_CREATE_NOTIFY_INFO CreateInfo
)
{
    if (CreateInfo == NULL) {
        // Process is terminating
        HandleProcessTermination(Process, ProcessId);
        return;
    }

    // Process is being created
    if (CreateInfo->ImageFileName != NULL) {
        // Check if this is a known cheat process
        if (IsKnownCheatProcess(CreateInfo->ImageFileName)) {
            // Set an error status to prevent the process from launching
            CreateInfo->CreationStatus = STATUS_ACCESS_DENIED;
        }
    }
}

PsSetCreateProcessNotifyRoutineEx(ProcessNotifyCallback, FALSE);
```
Notably, the Ex variant (introduced in Windows Vista SP1) provides the image file name and command line, which the original PsSetCreateProcessNotifyRoutine does not. The callback is called at PASSIVE_LEVEL from a system thread context.
Anti-cheats use this callback to detect cheat tool processes spawning on the system. If a known cheat launcher or injector process is created while the game is running, the anti-cheat can immediately flag this. Some implementations also set CreateInfo->CreationStatus to a failure code to outright prevent the process from launching.
PsSetCreateThreadNotifyRoutine fires on every thread creation and termination system-wide. Anti-cheats use it specifically to detect thread creation in the protected game process. When a new thread is created in the game process, the callback fires and the anti-cheat can inspect the thread’s start address.
```c
VOID ThreadNotifyCallback(HANDLE ProcessId, HANDLE ThreadId, BOOLEAN Create)
{
    if (!Create) return;

    if (IsProtectedProcess(ProcessId)) {
        PETHREAD Thread;
        // The lookup can fail if the thread is already gone - check the status
        if (!NT_SUCCESS(PsLookupThreadByThreadId(ThreadId, &Thread)))
            return;

        // Get the thread start address - this is stored in ETHREAD
        PVOID StartAddress = PsGetThreadWin32StartAddress(Thread);

        // Check if start address is within a known module
        if (!IsAddressInKnownModule(StartAddress)) {
            // Thread started at an address with no backing module - suspicious
            FlagSuspiciousThread(Thread, StartAddress);
        }

        ObDereferenceObject(Thread);
    }
}
```
The call to PsLookupThreadByThreadId retrieves the ETHREAD pointer for the new thread. PsGetThreadWin32StartAddress returns the Win32 start address as seen by the process, which is distinct from the kernel-internal start address. Once finished with the thread object, ObDereferenceObject releases the reference acquired by PsLookupThreadByThreadId.
A thread created in the game process whose start address does not fall within any loaded module’s address range is a strong indicator of injected code. Legitimate threads start inside module code. An injected thread typically starts in shellcode or manually mapped PE code that has no module backing.
PsSetLoadImageNotifyRoutine fires whenever an image (DLL or EXE) is mapped into any process. It provides the image file name and a PIMAGE_INFO structure containing the base address and size.
```c
VOID LoadImageCallback(
    PUNICODE_STRING FullImageName,
    HANDLE ProcessId,
    PIMAGE_INFO ImageInfo
)
{
    if (IsProtectedProcess(ProcessId)) {
        // A DLL was loaded into the protected game process
        // Verify it's on the allowlist or check its signature
        if (!IsAllowedModule(FullImageName)) {
            // Log and potentially act on this
            ReportSuspiciousModule(FullImageName, ImageInfo->ImageBase);
        }
    }
}
```
This is IRQL PASSIVE_LEVEL. The callback fires after the image is mapped but before its entry point executes, which gives the anti-cheat an opportunity to scan the image before any of its code runs.
CmRegisterCallbackEx registers a callback for registry operations. Anti-cheats use this to monitor for registry modifications that might indicate cheats configuring themselves or attempting to modify anti-cheat settings.
```c
NTSTATUS RegistryCallback(
    PVOID CallbackContext,
    PVOID Argument1,  // REG_NOTIFY_CLASS
    PVOID Argument2   // Operation-specific data
)
{
    REG_NOTIFY_CLASS notifyClass = (REG_NOTIFY_CLASS)(ULONG_PTR)Argument1;

    if (notifyClass == RegNtPreSetValueKey) {
        PREG_SET_VALUE_KEY_INFORMATION info =
            (PREG_SET_VALUE_KEY_INFORMATION)Argument2;

        // Check if someone is modifying anti-cheat registry keys
        if (IsProtectedRegistryKey(info->Object)) {
            return STATUS_ACCESS_DENIED;
        }
    }

    return STATUS_SUCCESS;
}
```
A minifilter driver sits in the file system filter stack and intercepts IRP requests going to and from file system drivers. Anti-cheats use minifilters to monitor for cheat file drops (writing known cheat executables or DLLs to disk), to detect reads of their own driver files (which might indicate attempts to patch the on-disk driver binary before it is verified), and to enforce file access restrictions.
```c
FLT_PREOP_CALLBACK_STATUS PreOperationCallback(
    PFLT_CALLBACK_DATA Data,
    PCFLT_RELATED_OBJECTS FltObjects,
    PVOID *CompletionContext
)
{
    if (Data->Iopb->MajorFunction == IRP_MJ_WRITE) {
        // Check if the target file is a known cheat file name
        PFLT_FILE_NAME_INFORMATION nameInfo;
        if (NT_SUCCESS(FltGetFileNameInformation(Data,
                FLT_FILE_NAME_NORMALIZED | FLT_FILE_NAME_QUERY_DEFAULT,
                &nameInfo)))
        {
            if (IsKnownCheatFileName(&nameInfo->Name)) {
                Data->IoStatus.Status = STATUS_ACCESS_DENIED;
                FltReleaseFileNameInformation(nameInfo);
                return FLT_PREOP_COMPLETE;
            }
            FltReleaseFileNameInformation(nameInfo);
        }
    }
    return FLT_PREOP_SUCCESS_NO_CALLBACK;
}
```
FltGetFileNameInformation retrieves the normalized file name for the target of the operation. FltReleaseFileNameInformation must be called to release the reference when done. Minifilter callbacks typically run at APC_LEVEL or PASSIVE_LEVEL, depending on the operation and the file system. This is important because many operations (like allocating paged pool or calling pageable functions) are not safe at DISPATCH_LEVEL or above.
The kernel driver can do far more than just register callbacks. It can actively scan the game process’s memory and the system-wide memory pool for artifacts of cheats.
As covered in the ObRegisterCallbacks section, the primary mechanism for protecting game memory from external reads and writes is stripping PROCESS_VM_READ and PROCESS_VM_WRITE from handles opened to the game process. This is effective against any cheat that uses standard Win32 APIs (ReadProcessMemory, WriteProcessMemory) because these ultimately call NtReadVirtualMemory and NtWriteVirtualMemory, which require appropriate handle access rights.
However, a kernel-mode cheat can bypass this entirely. It can call MmCopyVirtualMemory directly (an unexported but locatable kernel function) or manipulate page table entries directly to access game memory without going through the handle-based access control system. This is why handle protection alone is insufficient and why kernel-level cheats require kernel-level anti-cheat responses.
Anti-cheats periodically hash the code sections (.text sections) of the game executable and its core DLLs. A baseline hash is computed at game start, and periodic re-hashes are compared against the baseline. If the hash changes, someone has written to game code, which is a strong indicator of code patching (commonly used to enable no-recoil, speed, or aimbot functionality by patching game logic).
```c
// Pseudocode for code section integrity checking
BOOLEAN VerifyCodeSectionIntegrity(PEPROCESS Process, PVOID ModuleBase)
{
    // Attach to process context to read its memory
    KAPC_STATE apcState;
    KeStackAttachProcess(Process, &apcState);

    // Parse PE headers to find .text section
    PIMAGE_NT_HEADERS ntHeaders = RtlImageNtHeader(ModuleBase);
    PIMAGE_SECTION_HEADER section = IMAGE_FIRST_SECTION(ntHeaders);

    for (USHORT i = 0; i < ntHeaders->FileHeader.NumberOfSections; i++, section++) {
        if (memcmp(section->Name, ".text", 5) == 0) {
            PVOID sectionBase = (PVOID)((ULONG_PTR)ModuleBase + section->VirtualAddress);
            ULONG sectionSize = section->Misc.VirtualSize;

            // Compute hash of current code section contents
            UCHAR currentHash[32];
            ComputeSHA256(sectionBase, sectionSize, currentHash);

            // Compare against stored baseline hash
            if (memcmp(currentHash, gBaselineHash, 32) != 0) {
                KeUnstackDetachProcess(&apcState);
                return FALSE;  // Code modification detected
            }
        }
    }

    KeUnstackDetachProcess(&apcState);
    return TRUE;
}
```
The KeStackAttachProcess / KeUnstackDetachProcess pattern is used to temporarily attach the calling thread to the target process’s address space, allowing the driver to read memory that is mapped into the game process without going through handle-based access controls. RtlImageNtHeader parses the PE headers from the in-memory image base.
The most interesting memory scanning is the heuristic detection of manually mapped code. When a legitimate DLL loads, it appears in the process’s PEB module list (the InLoadOrderModuleList) and has a corresponding VAD entry whose flags mark it as a file-backed image mapping. Manual mapping bypasses the normal loader, so the mapped code appears in memory as an anonymous private mapping or as a file-backed mapping with suspicious characteristics.
The key heuristic is: find all executable memory regions in the process, then cross-reference each one against the list of loaded modules. Executable memory that does not correspond to any loaded module is suspicious.
```c
// Walk the address space with ZwQueryVirtualMemory to find
// executable anonymous mappings
VOID ScanForManuallyMappedCode(PEPROCESS Process)
{
    KAPC_STATE apcState;
    KeStackAttachProcess(Process, &apcState);

    PVOID baseAddress = NULL;
    MEMORY_BASIC_INFORMATION mbi;

    while (NT_SUCCESS(ZwQueryVirtualMemory(
        ZwCurrentProcess(), baseAddress, MemoryBasicInformation,
        &mbi, sizeof(mbi), NULL)))
    {
        if (mbi.State == MEM_COMMIT &&
            (mbi.Protect & PAGE_EXECUTE_READ ||
             mbi.Protect & PAGE_EXECUTE_READWRITE ||
             mbi.Protect & PAGE_EXECUTE_WRITECOPY) &&
            mbi.Type == MEM_PRIVATE)  // Private, not file-backed
        {
            // Executable private memory - not associated with any file mapping.
            // This is a strong indicator of manually mapped code or shellcode
            ReportSuspiciousRegion(mbi.BaseAddress, mbi.RegionSize,
                "Executable private memory without file backing");
        }

        baseAddress = (PVOID)((ULONG_PTR)mbi.BaseAddress + mbi.RegionSize);
        if ((ULONG_PTR)baseAddress >= 0x7FFFFFFFFFFF) break;  // User space limit
    }

    KeUnstackDetachProcess(&apcState);
}
```
ZwQueryVirtualMemory iterates through committed memory regions, returning a MEMORY_BASIC_INFORMATION structure for each. The Type field distinguishes private allocations (MEM_PRIVATE) from file-backed mappings (MEM_IMAGE, MEM_MAPPED). BattlEye’s scanning approach, as documented by the secret.club and back.engineering analyses, involves scanning all memory regions of the protected process and specifically flagging executable regions without file backing. It also scans external processes’ memory pages looking for execution bit anomalies, specifically targeting cases where page protection flags have been changed programmatically to make otherwise non-executable memory executable (a common technique when shellcode is staged).
The VAD (Virtual Address Descriptor) tree is a kernel-internal structure that the memory manager uses to track all memory regions allocated in a process. Each VAD_NODE (which is actually a MMVAD structure in kernel terms) contains information about the region: its base address and size, its protection, whether it is file-backed (and if so, which file), and various flags.
Anti-cheats walk the VAD tree directly rather than relying on ZwQueryVirtualMemory, because the VAD tree cannot be trivially hidden from kernel mode in the same way that module lists can be manipulated. Walking the VAD:
```c
// Simplified VAD walker - actual offsets and layouts are version-specific.
// This sketch uses the older MM_AVL_TABLE layout; on current builds VadRoot
// is an RTL_AVL_TREE of RTL_BALANCED_NODEs, but the walk is analogous.
VOID WalkVAD(PEPROCESS Process)
{
    // VadRoot is at a version-specific offset in EPROCESS
    // On Windows 11 23H2, this is at EPROCESS+0x7D8
    // (https://www.vergiliusproject.com/kernels/x64/windows-11/23h2/_EPROCESS)
    PMM_AVL_TABLE vadRoot =
        (PMM_AVL_TABLE)((ULONG_PTR)Process + EPROCESS_VAD_ROOT_OFFSET);
    WalkAVLTree(vadRoot->BalancedRoot.RightChild);
}

VOID WalkAVLTree(PMMADDRESS_NODE node)
{
    if (node == NULL) return;

    PMMVAD vad = (PMMVAD)node;

    // Check the VAD flags for suspicious characteristics:
    // u.VadFlags.PrivateMemory = 1 and executable protection = suspicious
    if (vad->u.VadFlags.PrivateMemory &&
        IsExecutableProtection(vad->u.VadFlags.Protection))
    {
        // No Subsection means no backing file object
        if (vad->Subsection == NULL) {
            ReportSuspiciousVAD(vad);
        }
    }

    WalkAVLTree(node->LeftChild);
    WalkAVLTree(node->RightChild);
}
```
We can observe this detection in practice using WinDbg’s !vad command on a process with injected code.
The first entry is a Private EXECUTE_READWRITE region with no backing file, injected by our test tool. Every legitimate module shows as Mapped Exe with a full file path.
The power of VAD walking is that it catches manually mapped code even if the cheat has manipulated the PEB module list or the LDR_DATA_TABLE_ENTRY chain to hide itself. The VAD is a kernel structure that usermode code cannot modify directly.
The classic injection technique: call CreateRemoteThread in the target process with LoadLibraryA as the thread start address and the DLL path as the argument. This is trivially detectable via PsSetCreateThreadNotifyRoutine: the new thread’s start address will be LoadLibraryA (or rather its address in kernel32.dll), and the caller process is not the game itself.
A more subtle check is the CLIENT_ID of the creating thread. When CreateRemoteThread is called, the kernel records which process created the thread. The anti-cheat can check whether a thread in the game process was created by an external process, which is a reliable indicator of injection.
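One simple way to surface this without parsing ETHREAD fields relies on the fact that the PsSetCreateThreadNotifyRoutine callback for thread creation executes in the context of the *creating* thread. A hedged sketch, where FlagRemoteThreadCreation is a hypothetical reporting helper:

```c
// Sketch: detect cross-process thread creation. The thread-notify callback
// runs in the context of the creating thread, so comparing the current
// process ID against the target ProcessId reveals remote creation.
// FlagRemoteThreadCreation is a hypothetical helper.
VOID RemoteThreadNotify(HANDLE ProcessId, HANDLE ThreadId, BOOLEAN Create)
{
    if (!Create || !IsProtectedProcess(ProcessId)) return;

    HANDLE creatorPid = PsGetCurrentProcessId();
    if (creatorPid != ProcessId) {
        // A thread in the game process was created by another process:
        // the classic CreateRemoteThread injection signature
        FlagRemoteThreadCreation(creatorPid, ProcessId, ThreadId);
    }
}
```

The check is cheap because it runs once per thread creation and needs no memory scanning; the trade-off is that it only catches injection techniques that actually create a new remote thread.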
QueueUserAPC and the underlying NtQueueApcThread allow queuing an Asynchronous Procedure Call to a thread in any process for which the caller has THREAD_SET_CONTEXT access. When the target thread enters an alertable wait, the APC fires and executes arbitrary code in the target thread’s context.
Detection at the kernel level leverages the KAPC structure. Each thread has a kernel APC queue and a user APC queue. Anti-cheats can inspect the pending APC queue of game process threads to detect suspicious APC targets:
```c
// Check for suspicious pending APCs on a thread
VOID InspectThreadAPCQueue(PETHREAD Thread)
{
    // The user APC queue is at a version-specific offset:
    // KTHREAD::ApcState.ApcListHead[1] (index 1 = user APC list;
    // KTHREAD is the first member of ETHREAD, so ETHREAD offsets work)
    PLIST_ENTRY apcList = (PLIST_ENTRY)((ULONG_PTR)Thread +
        ETHREAD_APC_STATE_OFFSET + KAPC_STATE_USER_APC_LIST_OFFSET);

    PLIST_ENTRY entry = apcList->Flink;
    while (entry != apcList) {
        PKAPC apc = CONTAINING_RECORD(entry, KAPC, ApcListEntry);

        // Check if the normal routine (user APC function)
        // points to an address without module backing
        if (apc->NormalRoutine != NULL &&
            !IsAddressInLoadedModule((PVOID)apc->NormalRoutine))
        {
            ReportSuspiciousAPC(Thread, apc->NormalRoutine);
        }

        entry = entry->Flink;
    }
}
```
A sophisticated injection technique maps a shared section object (backed by a file or created with NtCreateSection) into the target process using NtMapViewOfSection. This bypasses CreateRemoteThread-based detection heuristics because no remote thread is created initially. The injected code is then typically triggered via APC or by modifying an existing thread’s context.
Detection is via the VAD: a section mapping that appears in the game process but was created by an external process will have a distinct pattern in the VAD. Specifically, the MMVAD::u.VadFlags.NoChange and related flags, combined with the file object backing the section (or lack thereof), can reveal this technique.
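A complementary approach works from the view side: enumerate executable mapped views and resolve each one's backing file, flagging any view whose file does not correspond to a module the loader knows about. A sketch under those assumptions, using the semi-documented MemoryMappedFilenameInformation class (IsLoadedModulePath and ReportSuspiciousRegion are hypothetical helpers):

```c
// Sketch: flag executable section mappings whose backing file is not a
// loaded module. MemoryMappedFilenameInformation returns the NT path of
// the file backing a mapped view. IsLoadedModulePath is hypothetical.
VOID ScanMappedSections(PEPROCESS Process)
{
    KAPC_STATE apcState;
    KeStackAttachProcess(Process, &apcState);

    PVOID base = NULL;
    MEMORY_BASIC_INFORMATION mbi;
    UCHAR nameBuf[sizeof(UNICODE_STRING) + 512];  // UNICODE_STRING + path data

    while (NT_SUCCESS(ZwQueryVirtualMemory(ZwCurrentProcess(), base,
        MemoryBasicInformation, &mbi, sizeof(mbi), NULL)))
    {
        if (mbi.State == MEM_COMMIT && mbi.Type == MEM_MAPPED &&
            (mbi.Protect & (PAGE_EXECUTE_READ | PAGE_EXECUTE_READWRITE)))
        {
            // Resolve the backing file for this mapped view
            if (NT_SUCCESS(ZwQueryVirtualMemory(ZwCurrentProcess(),
                    mbi.BaseAddress, MemoryMappedFilenameInformation,
                    nameBuf, sizeof(nameBuf), NULL)))
            {
                PUNICODE_STRING path = (PUNICODE_STRING)nameBuf;
                if (!IsLoadedModulePath(path)) {
                    ReportSuspiciousRegion(mbi.BaseAddress, mbi.RegionSize,
                        "Executable mapped view outside loaded modules");
                }
            }
        }
        base = (PVOID)((ULONG_PTR)mbi.BaseAddress + mbi.RegionSize);
        if ((ULONG_PTR)base >= 0x7FFFFFFFFFFF) break;  // User space limit
    }

    KeUnstackDetachProcess(&apcState);
}
```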
Reflective DLL injection embeds a reflective loader inside the DLL that, when executed, maps the DLL into memory without using LoadLibrary. The DLL parses its own PE headers, resolves imports, applies relocations, and calls DllMain. The result is a fully functional DLL in memory that never appears in the InLoadOrderModuleList.
Detection: executable memory with a valid PE header (check for the MZ magic bytes and the PE\0\0 signature at the offset specified by e_lfanew) but no corresponding module list entry. This is a reliable indicator.
```c
// Check for PE headers in executable private memory
BOOLEAN HasValidPEHeader(PVOID base, SIZE_T size)
{
    if (size < sizeof(IMAGE_DOS_HEADER)) return FALSE;

    PIMAGE_DOS_HEADER dosHeader = (PIMAGE_DOS_HEADER)base;
    if (dosHeader->e_magic != IMAGE_DOS_SIGNATURE) return FALSE;

    // e_lfanew is signed - reject negative or out-of-range offsets
    if (dosHeader->e_lfanew < 0 ||
        (SIZE_T)dosHeader->e_lfanew >= size - sizeof(IMAGE_NT_HEADERS))
        return FALSE;

    PIMAGE_NT_HEADERS ntHeaders = (PIMAGE_NT_HEADERS)
        ((ULONG_PTR)base + dosHeader->e_lfanew);
    if (ntHeaders->Signature != IMAGE_NT_SIGNATURE) return FALSE;

    return TRUE;
}
```
We can observe this in practice using a simple test tool that allocates an RWX region and writes a minimal PE header into a running process:
Walking the VAD tree with !vad reveals the injected region immediately. The first entry at 0x8A0 is a Private EXECUTE_READWRITE region with no backing file. Compare this to the legitimate Target.exe image at the bottom, which is Mapped Exe EXECUTE_WRITECOPY with a full file path. Dumping the legitimate module’s base with db confirms a complete PE header with the DOS stub:
Dumping the injected region at 0x008A0000 also shows a valid MZ signature, but the rest of the header is mostly zeroes with no DOS stub. This is characteristic of manually mapped code:
Finally, !peb confirms that the injected region does not appear in any of the module lists. The PEB only contains Target.exe, ntdll.dll, kernel32.dll, and KernelBase.dll. The region at 0x008A0000 is completely invisible to any usermode API that enumerates loaded modules:
When BEDaisy wants to inspect a thread’s call stack, it uses an APC mechanism to capture the stack frames while the thread is in user mode. The APC fires in the game thread’s context and calls RtlWalkFrameChain or RtlCaptureStackBackTrace to capture the return address chain.
The back.engineering analysis of BEDaisy (and the Aki2k/BEDaisy GitHub research) documents this specifically: BEDaisy queues kernel APCs to threads in the protected process. The APC kernel routine runs at APC_LEVEL, captures the thread’s stack, and then analyzes each return address against the list of loaded modules. A return address pointing outside any loaded module is a strong indicator of injected code on the stack, which suggests the thread is currently executing injected code or returned from it.
```c
// Pseudocode for APC-based stack inspection
VOID KernelApcRoutine(
    PKAPC Apc,
    PKNORMAL_ROUTINE *NormalRoutine,
    PVOID *NormalContext,
    PVOID *SystemArgument1,
    PVOID *SystemArgument2
)
{
    // We're now running at APC_LEVEL in the context of the inspected thread
    PVOID frames[64];
    ULONG capturedFrames = RtlWalkFrameChain(frames, 64, 0);

    for (ULONG i = 0; i < capturedFrames; i++) {
        if (!IsAddressInKnownModule(frames[i])) {
            // Stack frame points outside any loaded module
            ReportSuspiciousStackFrame(frames[i]);
        }
    }
}
```
Hooks are the primary mechanism by which usermode cheats intercept and manipulate the game’s interaction with the OS. Detecting them is a core anti-cheat function.
The Import Address Table (IAT) of a PE file contains the addresses of imported functions. When a process loads, the loader resolves these addresses by looking up each imported function in the exporting DLL and writing the function’s address into the IAT. An IAT hook overwrites one of these entries with a pointer to attacker-controlled code.
Detection is straightforward: for each IAT entry, compare the resolved address against what the on-disk export of the correct DLL says the address should be.
```c
VOID DetectIATHooks(PVOID moduleBase)
{
    ULONG importSize;
    PIMAGE_IMPORT_DESCRIPTOR importDesc =
        RtlImageDirectoryEntryToData(moduleBase, TRUE,
                                     IMAGE_DIRECTORY_ENTRY_IMPORT, &importSize);

    while (importDesc->Name != 0) {
        PCHAR dllName = (PCHAR)((ULONG_PTR)moduleBase + importDesc->Name);
        PVOID importedDllBase = GetLoadedModuleBase(dllName);

        PULONG_PTR iat =
            (PULONG_PTR)((ULONG_PTR)moduleBase + importDesc->FirstThunk);
        PIMAGE_THUNK_DATA originalFirstThunk =
            (PIMAGE_THUNK_DATA)((ULONG_PTR)moduleBase + importDesc->OriginalFirstThunk);

        while (originalFirstThunk->u1.AddressOfData != 0) {
            PIMAGE_IMPORT_BY_NAME importByName =
                (PIMAGE_IMPORT_BY_NAME)((ULONG_PTR)moduleBase +
                                        originalFirstThunk->u1.AddressOfData);

            // Get the expected address from the exporting DLL
            PVOID expectedAddr = GetExportedFunctionAddress(importedDllBase,
                                                           importByName->Name);
            PVOID actualAddr = (PVOID)*iat;

            if (expectedAddr != actualAddr) {
                ReportIATHook(dllName, importByName->Name, expectedAddr, actualAddr);
            }

            iat++;
            originalFirstThunk++;
        }
        importDesc++;
    }
}
```
RtlImageDirectoryEntryToData locates the import directory from the PE headers. The TRUE parameter specifies that the image is mapped (as opposed to a raw file on disk), which is correct when working with in-memory modules. The outer loop walks the IMAGE_IMPORT_DESCRIPTOR array, terminating on a zero Name field. The inner loop compares each resolved IAT entry against the expected export address.
Inline hooks patch the first few bytes of a function with a JMP (opcode 0xE9 for relative near jump, or 0xFF 0x25 for indirect jump through a memory pointer) to redirect execution to attacker code, which typically performs its modifications and then jumps back to the original code (a “trampoline” pattern).
Detection involves reading the first 16-32 bytes of each monitored function and checking for:
- `0xE9` (JMP rel32)
- `0xFF 0x25` (JMP [rip+disp32]) - common for 64-bit hooks
- `0x48 0xB8 ... 0xFF 0xE0` (MOV RAX, imm64; JMP RAX) - an absolute 64-bit jump sequence
- `0xCC` (INT 3) - a software breakpoint, which can also be a hook point

The anti-cheat reads the on-disk PE file and compares the on-disk bytes of function prologues against what is currently in memory. Any discrepancy indicates patching.
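A minimal, portable sketch of the byte-pattern side of this check (the function name is invented, and a production detector compares against the on-disk PE bytes rather than pattern-matching alone):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Sketch: flag a function prologue whose first bytes match one of the
// common x64 detour patterns listed above. Illustrative only - raw
// byte matching can false-positive on legitimate code.
static bool PrologueLooksHooked(const uint8_t *p, size_t len)
{
    if (len >= 1 && p[0] == 0xE9)                      // JMP rel32
        return true;
    if (len >= 2 && p[0] == 0xFF && p[1] == 0x25)      // JMP [rip+disp32]
        return true;
    if (len >= 12 && p[0] == 0x48 && p[1] == 0xB8 &&   // MOV RAX, imm64
        p[10] == 0xFF && p[11] == 0xE0)                // JMP RAX
        return true;
    if (len >= 1 && p[0] == 0xCC)                      // INT 3 breakpoint
        return true;
    return false;
}
```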
To demonstrate inline hook detection, we use a test tool that patches NtReadVirtualMemory in a running process with a MOV RAX, imm64; JMP RAX hook:
Before patching, the function prologue shows a clean syscall stub. mov r10, rcx saves the first argument, mov eax, 3Fh loads the syscall number, and syscall transitions to kernel mode:
After the hook is installed, the first 12 bytes are overwritten with mov rax, 0xDEADBEEFCAFEBABE; jmp rax, redirecting execution to an attacker-controlled address. An anti-cheat comparing these bytes against the on-disk copy of ntdll would immediately flag the mismatch:
The System Service Descriptor Table (SSDT) is the kernel’s dispatch table for syscalls. When a usermode process executes a syscall instruction, the kernel uses the syscall number (placed in EAX) to index into the SSDT and invoke the corresponding kernel function. Patching the SSDT redirects syscalls to attacker-controlled code.
SSDT hooking is a classic technique that became significantly harder after the introduction of PatchGuard (Kernel Patch Protection, KPP) in 64-bit Windows. PatchGuard monitors the SSDT (among many other structures) and triggers a CRITICAL_STRUCTURE_CORRUPTION bug check (0x109) if it detects modification. As a result, SSDT hooking is essentially dead in 64-bit Windows. However, anti-cheats still verify SSDT integrity as a defense in depth measure.
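As a portable sketch of that defense-in-depth check (function and variable names are hypothetical, and note that real x64 SSDTs store compacted relative offsets rather than raw pointers), the core comparison reduces to a range check against the kernel image:

```c
#include <stddef.h>
#include <stdint.h>

// Sketch: every resolved SSDT entry should land inside the ntoskrnl
// image. Addresses are modeled as plain integers; on a real system
// the table base and the kernel image bounds come from the running
// kernel, not from parameters.
static size_t CountEntriesOutsideKernel(const uintptr_t *table, size_t count,
                                        uintptr_t ntosBase, uintptr_t ntosSize)
{
    size_t suspicious = 0;
    for (size_t i = 0; i < count; i++) {
        // An entry outside [ntosBase, ntosBase + ntosSize) is a possible hook
        if (table[i] < ntosBase || table[i] >= ntosBase + ntosSize)
            suspicious++;
    }
    return suspicious;
}
```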
The Interrupt Descriptor Table (IDT) maps interrupt vectors to their handler routines. The Global Descriptor Table (GDT) defines memory segments. Both are processor-level structures that cannot be easily protected by PatchGuard alone on all configurations.
A cheat operating at kernel level can attempt to replace IDT entries to intercept specific interrupts, which can be used for control flow interception or as a covert channel. Anti-cheats verify that IDT entries point to expected kernel locations:
```c
VOID VerifyIDTIntegrity(void)
{
    IDTR idtr;
    __sidt(&idtr);  // Read IDTR register

    PIDT_ENTRY64 idt = (PIDT_ENTRY64)idtr.Base;

    for (int i = 0; i < 256; i++) {
        ULONG_PTR handler = GetIDTHandlerAddress(&idt[i]);

        // Verify handler is in ntoskrnl or a known driver address range
        if (!IsKernelCodeAddress(handler)) {
            ReportIDTModification(i, handler);
        }
    }
}
```
A common evasion technique is for cheats to perform syscalls directly (using the syscall instruction with the appropriate syscall number) rather than going through ntdll.dll functions. This bypasses usermode hooks placed in ntdll. Anti-cheats detect this by monitoring threads within the game process for syscall instruction execution from unexpected code locations, and by checking whether ntdll functions that should be called are actually being called with expected frequency and patterns.
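The memory-scanning variant of this detection can be sketched as a naive opcode scan (illustrative only - the function name is invented, and real detectors disassemble rather than pattern-match raw bytes, since `0F 05` can also occur inside data or longer instructions):

```c
#include <stddef.h>
#include <stdint.h>

// Sketch: scan a code region for raw SYSCALL (0F 05) instruction bytes.
// A syscall opcode found in executable memory that does not belong to
// ntdll (or another module expected to issue syscalls) is a candidate
// for the direct-syscall evasion described above.
static size_t FindSyscallOpcodes(const uint8_t *code, size_t len,
                                 size_t *offsets, size_t maxOffsets)
{
    size_t found = 0;
    for (size_t i = 0; i + 1 < len && found < maxOffsets; i++) {
        if (code[i] == 0x0F && code[i + 1] == 0x05)
            offsets[found++] = i;   // record offset of the syscall opcode
    }
    return found;
}
```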
On a properly configured Windows system with Secure Boot enabled, all kernel drivers must be signed by a certificate trusted by Microsoft. Test signing mode (enabled with bcdedit /set testsigning on) allows loading self-signed drivers and is a common development and cheat-deployment technique.
Anti-cheats detect test signing mode by reading the Windows boot configuration and by checking the kernel variable that reflects whether DSE is currently enforced. Some anti-cheats refuse to launch if test signing is enabled.
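A hedged sketch of the usermode side: the CodeIntegrityOptions bitmask returned by NtQuerySystemInformation with the SystemCodeIntegrityInformation class reports whether test signing is active. The constants below mirror the publicly documented CODEINTEGRITY_OPTION_* values; verify them against your SDK headers before relying on them.

```c
#include <stdbool.h>

// Assumed flag values from the public SYSTEM_CODEINTEGRITY_INFORMATION
// documentation - treat as assumptions, not authoritative definitions.
#define CODEINTEGRITY_OPTION_ENABLED  0x01
#define CODEINTEGRITY_OPTION_TESTSIGN 0x02

// Decode the bitmask a caller obtained from
// NtQuerySystemInformation(SystemCodeIntegrityInformation, ...)
static bool IsTestSigningEnabled(unsigned long codeIntegrityOptions)
{
    return (codeIntegrityOptions & CODEINTEGRITY_OPTION_TESTSIGN) != 0;
}
```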
The SeValidateImageHeader and SeValidateImageData functions in the kernel validate driver signatures. Anti-cheats can inspect loaded driver objects and verify that their IMAGE_INFO_EX ImageSignatureType and ImageSignatureLevel fields reflect proper signing.
Bring Your Own Vulnerable Driver is the dominant technique for loading unsigned kernel code in 2024-2026. The attack works as follows:
1. Find a legitimately signed driver that contains an exploitable primitive - typically an IOCTL handler exposing arbitrary physical or virtual memory read/write (e.g. by calling MmMapIoSpace with attacker-controlled parameters).
2. Load that signed driver through the normal driver-load path; because it carries a valid signature, it passes Driver Signature Enforcement.
3. Use the vulnerable IOCTL to read and write kernel memory, and from there map the cheat's own unsigned code into the kernel.

Common BYOVD targets have included drivers from MSI, Gigabyte, ASUS, and various hardware vendors. These drivers often have IOCTL handlers that expose direct physical memory read/write capability, which is all an attacker needs.
The primary defense against BYOVD is a blocklist of known-vulnerable drivers. The Microsoft Vulnerable Driver Blocklist (maintained in DriverSiPolicy.p7b) is built into Windows and distributed via Windows Update. Anti-cheats maintain their own, more aggressive blocklists.
Vanguard in particular is known for actively comparing the set of loaded drivers against its blocklist and refusing to allow the protected game to launch if a blocklisted driver is present. A launch-time pre-scan alone is not sufficient, because some BYOVD attacks load the vulnerable driver, use it, and unload it before the game ever starts; this is part of why Vanguard loads at boot and monitors driver activity continuously rather than checking only at game launch.
This is one of the more interesting internals that kernel cheat developers and anti-cheat engineers both care deeply about.
PiDDBCacheTable is a kernel-internal AVL tree that caches information about previously loaded drivers. When a driver is loaded, the kernel stores an entry keyed by the driver’s TimeDateStamp (from the PE header) and SizeOfImage. This cache is used to quickly look up whether a driver has been seen before. The structure is a RTL_AVL_TABLE protected by PiDDBLock (an ERESOURCE lock).
Cheat developers who manually map a driver without going through the normal load path try to erase or modify the corresponding PiDDBCacheTable entry to conceal that their driver was ever loaded. Anti-cheats detect this by:
- Cross-referencing drivers found in memory against PiDDBCacheTable - if a driver is in memory (found via pool tag scanning or other means) but has no PiDDBCacheTable entry, the entry was probably scrubbed.
- Monitoring PiDDBLock for unexpected acquisitions from non-kernel threads.
- Validating the internal consistency of PiDDBCacheTable entries.

```c
// Locating PiDDBCacheTable (must be found via signature scan since it is not exported).
// This is version-specific and fragile; anti-cheats maintain multiple signatures.
BOOLEAN FindPiDDBCacheTable(PVOID *TableAddress)
{
    // Pattern locating a LEA RCX, [RIP+disp32] that references PiDDBCacheTable
    // in ntoskrnl. This is a simplified illustration - real implementations
    // use robust pattern matching.
    PVOID ntoskrnl = GetKernelModuleBase("ntoskrnl.exe");
    PUCHAR pattern = (PUCHAR)"\x48\x8D\x0D"; // LEA RCX, [RIP+...]

    PVOID match = FindPattern(ntoskrnl, pattern, 3, NTOSKRNL_TEXT_RANGE);
    if (match) {
        // Extract the RIP-relative offset; the target is
        // instruction end (match + 7) plus the offset
        INT32 offset = *(INT32*)((ULONG_PTR)match + 3);
        *TableAddress = (PVOID)((ULONG_PTR)match + 7 + offset);
        return TRUE;
    }
    return FALSE;
}
```
PiDDBCacheTable is not exported and PiDDBCacheEntry is not in public symbols. To interact with the cache, we need to reverse the entry structure. The compare routine is the best starting point since it directly accesses the fields used for ordering.
The decompiled output reveals the structure layout. The function receives two PiDDBCacheEntry pointers and first compares their DriverName fields (a UNICODE_STRING at offset 0x10) using RtlCompareUnicodeString. If the names are equal and TableContext is non-zero, the entries are considered equal. Otherwise, it falls through to comparing the TimeDateStamp field (a ULONG at offset 0x20). This gives us the recovered structure:
```c
// Recovered entry layout. Offsets are relative to the entry data,
// which sits after the 0x20-byte RTL_BALANCED_LINKS tree header.
struct PiDDBCacheEntry {
    LIST_ENTRY     List;          // 0x00 - linked into the PiDDB list
    UNICODE_STRING DriverName;    // 0x10 - driver filename (from compare routine offset)
    ULONG          TimeDateStamp; // 0x20 - PE header timestamp (secondary sort key)
};
```
PiDDBCacheTable is an RTL_AVL_TABLE, a self-balancing binary search tree. Each node in the tree starts with an _RTL_BALANCED_LINKS header containing Parent, LeftChild, and RightChild pointers. The actual PiDDBCacheEntry data sits immediately after this header.
To enumerate entries, we start by resolving the table address. PiDDBCacheTable is not exported, so anti-cheats locate it via signature scanning in ntoskrnl.exe. In WinDbg with symbols loaded, we can resolve it directly:
The table contains 151 cached driver entries with a tree depth of 9. The CompareRoutine points to PiCompareDDBCacheEntries, confirming this is the right table. The BalancedRoot is the entry point into the tree. Its RightChild gives us the first real node:
From each node, the entry data starts at offset 0x20 (past the _RTL_BALANCED_LINKS header). Adding our recovered offsets: DriverName at node+0x30, TimeDateStamp at node+0x40. Following the LeftChild and RightChild pointers lets us walk the entire tree:
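The traversal can be modeled in portable C. The node layout below mimics _RTL_BALANCED_LINKS followed by entry data; it is a simplified stand-in (DriverName omitted, field names invented), not the real kernel definitions:

```c
#include <stddef.h>
#include <stdint.h>

// Simplified model of an AVL node: three tree links plus a reserved
// slot standing in for the balance field, then the entry data that
// begins past the 0x20-byte header on x64.
typedef struct BalancedNode {
    struct BalancedNode *Parent;
    struct BalancedNode *LeftChild;
    struct BalancedNode *RightChild;
    uintptr_t Reserved;              // balance/padding in the real struct
    uint32_t TimeDateStamp;          // entry data begins here
} BalancedNode;

// Recursive in-order walk collecting every entry's TimeDateStamp;
// returns the index one past the last element written.
static size_t WalkTree(const BalancedNode *node, uint32_t *out, size_t idx)
{
    if (node == NULL)
        return idx;
    idx = WalkTree(node->LeftChild, out, idx);
    out[idx++] = node->TimeDateStamp;
    return WalkTree(node->RightChild, out, idx);
}
```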
A cheat developer who manually maps a kernel driver would try to find and remove their entry from this tree to avoid detection. An anti-cheat that detects a driver in memory (via pool tag scanning or other means) but finds no corresponding PiDDBCacheTable entry knows the entry was scrubbed.
MmUnloadedDrivers is a kernel array (also not exported) that maintains a circular buffer of the last 50 unloaded drivers, storing their name, start address, end address, and unload timestamp. This structure allows debugging and forensics of driver activity.
Cheat developers who successfully load and then unload a kernel driver often try to zero out or corrupt their entry in MmUnloadedDrivers to hide traces. Anti-cheats detect this by:
- Scanning for zeroed or otherwise inconsistent MmUnloadedDrivers entries.
- Cross-checking the timestamps in MmUnloadedDrivers against other kernel timestamps and logs.

When a kernel allocation exceeds approximately 4KB (more precisely, when it exceeds a threshold managed by the pool allocator), it is managed as a “big pool allocation” tracked in the PoolBigPageTable. Anti-cheats scan this table to find memory allocations that were made by manually mapped drivers. A manually mapped driver typically makes large allocations for its code and data sections; these show up in the big pool table with the allocation address but without a corresponding loaded driver.
The technique is to enumerate all big pool entries, then cross-reference each allocation’s address against the list of loaded driver address ranges. Allocations in no driver’s range that are the right size to be driver code sections are suspicious.
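A minimal sketch of that cross-referencing step, with invented types standing in for data that a real implementation would read from nt!PoolBigPageTable and the loaded-module list:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// [Start, End) address range of a legitimately loaded driver image
typedef struct { uintptr_t Start; uintptr_t End; } AddrRange;

// Report big-pool allocation addresses that fall inside no loaded
// driver's range - candidates for manually mapped driver sections.
static size_t FindOrphanAllocations(const uintptr_t *allocs, size_t nAllocs,
                                    const AddrRange *drivers, size_t nDrivers,
                                    uintptr_t *orphans, size_t maxOrphans)
{
    size_t found = 0;
    for (size_t i = 0; i < nAllocs && found < maxOrphans; i++) {
        bool covered = false;
        for (size_t j = 0; j < nDrivers; j++) {
            if (allocs[i] >= drivers[j].Start && allocs[i] < drivers[j].End) {
                covered = true;
                break;
            }
        }
        if (!covered)
            orphans[found++] = allocs[i];   // allocation in no driver's range
    }
    return found;
}
```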
Anti-cheat code itself is a high-value target for reverse engineering. Reverse engineers analyzing the anti-cheat driver need to use kernel debuggers, which anti-cheats aggressively detect.
At the usermode level (in the game-injected DLL), the anti-cheat uses NtQueryInformationProcess with multiple information classes:
- ProcessDebugPort (7): Returns a non-zero value if a debugger is attached via DebugActiveProcess. A kernel driver can spoof this by hooking NtQueryInformationProcess, but the check is done in the kernel driver itself as well.
- ProcessDebugObjectHandle (30): Returns a handle to the debug object if one exists.
- ProcessDebugFlags (31): Returns the NoDebugInherit flag; checking for its inverse reveals debugger presence.

The kernel driver checks the kernel-exported variables KdDebuggerEnabled and KdDebuggerNotPresent. On a system with WinDbg (or any kernel debugger) attached, KdDebuggerEnabled is TRUE and KdDebuggerNotPresent is FALSE.
```c
BOOLEAN IsKernelDebuggerPresent(void)
{
    // KD_DEBUGGER_ENABLED is a kernel export
    if (*KdDebuggerEnabled && !*KdDebuggerNotPresent) {
        return TRUE;
    }

    // Additional check: attempt a debug break and see if it's handled
    // More sophisticated: check specific kernel structures
    return FALSE;
}
```
Some anti-cheats go further and directly inspect the KDDEBUGGER_DATA64 structure and the shared kernel data page (KUSER_SHARED_DATA) for debugger-related flags.
NtSetInformationThread with ThreadHideFromDebugger (17) sets a flag in the thread’s ETHREAD structure (CrossThreadFlags.HideFromDebugger). Once set, the kernel will not deliver debug events for that thread to any attached debugger. The thread becomes essentially invisible to WinDbg: breakpoints in the thread do not trigger debugger notification, exceptions are not forwarded.
Anti-cheats use this to protect their own threads. However, they also detect if cheats are using it to hide their own injected threads. The detection method is to enumerate all threads in the system via a kernel enumeration (not via usermode APIs that could be hooked) and check the HideFromDebugger bit in CrossThreadFlags for each thread. A hidden thread in the game process that the anti-cheat did not itself hide is a red flag.
```c
// Check CrossThreadFlags for HideFromDebugger
#define PS_CROSS_THREAD_FLAGS_HIDEFROMDEBUGGER 0x4

VOID CheckThreadDebugVisibility(PETHREAD Thread)
{
    // CrossThreadFlags is at a version-specific offset in ETHREAD
    ULONG crossFlags =
        *(ULONG*)((ULONG_PTR)Thread + ETHREAD_CROSS_THREAD_FLAGS_OFFSET);

    if (crossFlags & PS_CROSS_THREAD_FLAGS_HIDEFROMDEBUGGER) {
        // Thread is hidden from debuggers
        // If we didn't hide it, flag it
        if (!IsAntiCheatOwnedThread(Thread)) {
            ReportHiddenThread(Thread);
        }
    }
}
```
To demonstrate this detection, we use a test tool that creates a remote thread in Target.exe and then sets ThreadHideFromDebugger on it:
In WinDbg, we convert the decimal TID to hex, locate the thread in the process, and inspect its CrossThreadFlags. Before setting the flag, the value is 0x5402 with bit 2 (HideFromDebugger) clear:
After calling NtSetInformationThread with ThreadHideFromDebugger, the value changes to 0x5406. Bit 2 is now set, making this thread invisible to any attached debugger:
An anti-cheat enumerating threads in the game process would check this bit on every thread. A hidden thread that the anti-cheat did not create itself is a strong indicator of injected cheat code.
Single-step debugging (via the TF flag in EFLAGS) and hardware breakpoints dramatically increase the time between instruction executions. Anti-cheats use RDTSC instruction-based timing to detect this:
```c
UINT64 before = __rdtsc();

// Execute a fixed number of operations
volatile ULONG dummy = 0;
for (int i = 0; i < 1000; i++) dummy += i;

UINT64 after = __rdtsc();
UINT64 elapsed = after - before;

if (elapsed > EXPECTED_MAXIMUM_CYCLES) {
    // Execution was slowed - likely single-stepping or a breakpoint
    ReportDebuggerDetected();
}
```
The threshold EXPECTED_MAXIMUM_CYCLES is calibrated based on known CPU behavior. Single-stepping can add thousands of cycles per instruction (due to debug exception handling), making the timing discrepancy obvious.
The x86-64 debug registers (DR0-DR3 for breakpoint addresses, DR6 for status, DR7 for control) are accessible in kernel mode. Reading them allows detection of hardware breakpoints set by a debugger:
```c
BOOLEAN HasHardwareBreakpoints(void)
{
    ULONG_PTR dr7 = __readdr(7);  // Read DR7 (debug control register)

    // Check Local Enable bits (L0, L1, L2, L3) for each breakpoint
    // Bits 0, 2, 4, 6 of DR7 are the local enable bits for BP 0-3
    if (dr7 & 0x55) {  // 0x55 = 01010101b - all four local enable bits
        return TRUE;
    }
    return FALSE;
}
```
Anti-cheats scan all threads’ saved debug register state (accessible via the CONTEXT structure obtained with PsGetContextThread, or directly from the thread’s trap frame in KTHREAD) for active hardware breakpoints not set by the anti-cheat itself.
Type-1 hypervisor-based debuggers (like a custom hypervisor running a Windows VM for isolated debugging) are significantly harder to detect. The primary detection vectors are:
CPUID checks: The hypervisor present bit (bit 31 of ECX when CPUID leaf 1 is executed) indicates a hypervisor is present. The hypervisor vendor can be queried with CPUID leaf 0x40000000. VMware returns “VMwareVMware”, VirtualBox returns “VBoxVBoxVBox”. An unknown vendor string is suspicious.
MSR timing: Executing RDMSR in a VM introduces additional overhead compared to native execution. Anti-cheats time MSR reads and flag anomalies.
CPUID instruction timing: CPUID is not privileged, but it unconditionally causes a VM exit, so every execution must be handled by the hypervisor, introducing measurable latency compared to bare metal.
DMA cheats represent the current frontier of the anti-cheat arms race, and they are genuinely hard to address with software alone.
A PCIe DMA (Direct Memory Access) cheat uses a PCIe-connected device - typically a development FPGA board - that can directly read the host system’s physical memory via the PCIe bus without the CPU being involved. The pcileech framework and its LeechCore library provide the software stack for these devices. The device physically appears on the PCIe bus, acquires access to the host’s physical memory via the PCIe TLP (Transaction Layer Packet) protocol, and reads game process memory by translating virtual addresses to physical addresses (using the page tables, which are also in physical memory and can be read by the device).
The attacking machine (running the cheat software) is physically separate from the victim machine (running the game). All cheat logic runs on the attacker’s machine. The game machine has no processes, no drivers, no memory allocations from the cheat. From a pure software perspective, the game machine is completely clean.
The FPGA sits on the game PC’s PCIe bus and reads physical memory via TLPs without CPU involvement. The cheat software runs entirely on a separate machine, leaving the game PC clean from a software perspective.
PCIe communication is structured around TLPs. A memory read TLP from a DMA device contains the physical address to read and the requested byte count. The PCIe root complex services this request by reading the specified physical memory and returning the data in a completion TLP. This is entirely hardware-level and the CPU is not involved in servicing the request.
The device needs to be configured with a valid BAR (Base Address Register) that the BIOS assigns during PCIe enumeration. The device also needs the target system’s IOMMU (if present) to either be disabled or to have the device’s DMA transactions allowed through it.
The IOMMU (Intel VT-d, AMD-Vi) is a hardware unit that translates DMA addresses from PCIe devices using a device-specific page table (analogous to the CPU’s page tables for usermode address translation). If the IOMMU is enabled and properly configured, a PCIe device can only access physical memory that the OS has explicitly granted to it via the IOMMU page tables.
With IOMMU enabled, a DMA device that is not associated with any OS driver has no IOMMU mappings and cannot access arbitrary physical memory. This is theoretically a hardware-level defense against DMA attacks.
In practice, the IOMMU defense has significant gaps. Many gaming motherboards ship with IOMMU disabled by default. Even when enabled, the IOMMU configuration is complex and many systems have misconfigured IOMMU policies that leave large physical memory ranges accessible. And critically, DMA firmware that successfully impersonates a legitimate PCIe device (e.g., a USB controller or a network card that the OS has granted IOMMU access to) can potentially access memory through the IOMMU using the legitimate device’s granted permissions.
Sophisticated DMA cheat firmware is designed to mimic the PCIe device ID, vendor ID, subsystem ID, and BAR0 configuration of a legitimate device. A Xilinx FPGA running customized firmware can present itself to the BIOS and OS as, for example, a USB host controller. The OS loads the legitimate driver for that device (providing IOMMU coverage), and the FPGA firmware uses that coverage to perform DMA reads.
Anti-cheats attempt to detect this by enumerating all PCIe devices and validating that each device’s reported characteristics match expected hardware. But without specific firmware-level attestation, it is very difficult to distinguish legitimate hardware from a properly mimicking FPGA.
Epic Games’ requirement for Secure Boot and TPM 2.0 for Fortnite is directly related to the DMA threat. Secure Boot ensures that only signed bootloaders run, which prevents boot-time attacks that could disable IOMMU or install firmware-level cheats. TPM 2.0 enables measured boot (where each boot stage’s hash is recorded in the TPM’s PCR registers), providing an attestation chain that proves the system booted in a known-good state. Remote attestation using the TPM can allow a server to verify that the client system has not been tampered with at the firmware level.
This does not solve the DMA problem directly (a DMA attack device physically attached to the PCIe slot bypasses all of this), but it closes some of the software-assisted DMA attack paths.
No static protection scheme is sufficient. Behavioral detection operating on game telemetry is the complementary layer that addresses what kernel protections cannot.
The kernel anti-cheat driver operates at a level where it can intercept raw input before it reaches the game. Drivers for HID (Human Interface Device) input, specifically for mice and keyboards, sit in the input driver stack. By installing a filter driver above mouclass.sys or kbdclass.sys, an anti-cheat can observe all input events with timestamps accurate to the system clock (microsecond resolution).
Aimbot detection targets the statistical properties of mouse movement. Human aiming exhibits specific properties: Fitts’ Law governs approach trajectories, there is characteristic deceleration as the cursor approaches a target, velocity profiles have specific acceleration and deceleration curves, and there is measurement noise. An aimbot performing perfect linear interpolation to a target produces movement that violates these properties. A triggerbot (which fires automatically when the crosshair is on target but does not manipulate mouse movement) is detected via reaction time analysis: human reaction times to a target crossing the crosshair have a minimum physiological floor (approximately 150-200ms) with a characteristic distribution. Reaction times below this floor with high consistency indicate automation.
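As a toy illustration of the reaction-time heuristic described above (the ~150 ms physiological floor comes from the text; the 80% consistency threshold is an invented parameter, not a value from any shipping anti-cheat):

```c
#include <stdbool.h>
#include <stddef.h>

// Sketch: flag a sample of measured reaction times (milliseconds) when
// too many fall below the human physiological floor. Consistently
// sub-floor reactions indicate automation; occasional fast reactions
// (lucky pre-fires) do not.
static bool ReactionTimesSuspicious(const double *ms, size_t n)
{
    if (n == 0)
        return false;

    size_t belowFloor = 0;
    for (size_t i = 0; i < n; i++) {
        if (ms[i] < 150.0)          // physiological floor from the text
            belowFloor++;
    }

    // Flag when at least 80% of samples are below the floor
    return belowFloor * 5 >= n * 4;
}
```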
The Collins et al. (CheckMATE 2024) paper documents the application of CNNs to triggerbot detection, achieving approximately 99.2% accuracy on labeled datasets. The features fed to the network include mouse position time series, click timing relative to target position, and velocity profiles.
The AntiCheatPT paper (2025) applies transformer architectures to aimbot detection. Using 256-tick windows with 44 data points per tick (including position, velocity, acceleration, view angle rates, and click events), the model achieves 89.17% accuracy in distinguishing legitimate players from aimbot users. The transformer architecture is well-suited to this problem because aimbots often introduce temporal correlations in the input data (smooth tracking, periodic corrections) that attention mechanisms can exploit.
Graph neural networks are used for collusion detection (wallhack and communication-based cheating in team games) by modeling player interaction graphs and detecting anomalous patterns, such as players consistently targeting enemies through walls or demonstrating perfect awareness of enemy positions without line-of-sight.
Left: a legitimate player’s mouse path follows Fitts’ Law with a natural S-curve, overshoot, and micro-corrections. Right: an aimbot produces an idle phase followed by an instant linear snap to the target with no natural deceleration.
The flow from raw data to ban decision typically works as follows:
The encryption of telemetry data is critical both for privacy (the data includes all mouse movements) and for anti-tamper (preventing cheats from identifying and falsifying telemetry).
The most reliable VM detection is CPUID-based. When CPUID is executed with EAX=1, bit 31 of ECX is set if a hypervisor is present (this is the “Hypervisor Present” bit). With EAX=0x40000000, the hypervisor vendor string is returned in EBX, ECX, EDX:
#include <windows.h>   // BOOLEAN, TRUE, FALSE
#include <intrin.h>    // __cpuid
#include <string.h>

BOOLEAN IsRunningInVM(void)
{
    int cpuInfo[4];
    __cpuid(cpuInfo, 1);

    // Check hypervisor present bit (ECX bit 31)
    if (cpuInfo[2] & (1 << 31)) {
        // Get hypervisor vendor string (leaf 0x40000000, EBX:ECX:EDX)
        __cpuid(cpuInfo, 0x40000000);
        char vendor[13];
        memcpy(vendor,     &cpuInfo[1], 4);
        memcpy(vendor + 4, &cpuInfo[2], 4);
        memcpy(vendor + 8, &cpuInfo[3], 4);
        vendor[12] = '\0';

        // Known VM vendors
        if (strcmp(vendor, "VMwareVMware") == 0 ||
            strcmp(vendor, "VBoxVBoxVBox") == 0 ||
            strcmp(vendor, "Microsoft Hv") == 0 ||  // Hyper-V
            strcmp(vendor, "KVMKVMKVM") == 0) {
            return TRUE;
        }
        return TRUE;  // Unknown hypervisor is also suspicious
    }
    return FALSE;
}
Each VM platform leaves characteristic artifacts in the registry and device enumeration:
- VMware: HKLM\SOFTWARE\VMware, Inc.\VMware Tools; PCI device \Device\VMwareHGFS; virtual devices appearing in Win32_PnPEntity with “VMware” in the name.
- VirtualBox: HKLM\SOFTWARE\Oracle\VirtualBox Guest Additions; the VBoxMiniRdDN driver; registry key HKLM\HARDWARE\ACPI\DSDT\VBOX__.
- Hyper-V: HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters; presence of vmbus and storvsc driver objects.

Anti-cheats query these artifacts from kernel mode, where they cannot be intercepted by usermode hooking. A system presenting any of these artifacts is likely running in a VM, and the anti-cheat can refuse to operate or flag the session.
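Once the registry keys, devices, and driver objects have been enumerated (from kernel mode, as described above), the matching itself is a simple lookup. A sketch of that logic only; the function name and signature are hypothetical:

```python
# Artifact lists are the ones quoted in the text above.
VM_ARTIFACTS = {
    "VMware": [
        r"HKLM\SOFTWARE\VMware, Inc.\VMware Tools",
        r"\Device\VMwareHGFS",
    ],
    "VirtualBox": [
        r"HKLM\SOFTWARE\Oracle\VirtualBox Guest Additions",
        r"HKLM\HARDWARE\ACPI\DSDT\VBOX__",
        "VBoxMiniRdDN",
    ],
    "Hyper-V": [
        r"HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters",
        "vmbus",
        "storvsc",
    ],
}

def detect_vm_platform(observed):
    """Given the set of registry keys / device / driver names found on
    the system, return the first VM platform whose artifacts match."""
    for platform, artifacts in VM_ARTIFACTS.items():
        if any(a in observed for a in artifacts):
            return platform
    return None
```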
Cheat developers sometimes use nested hypervisors to create a transparent analysis environment: they run the game in a VM, with the cheat running in the VM’s host. Detection of nested hypervisors relies on timing anomalies: CPUID executed inside a nested VM is handled by two hypervisors in sequence, introducing double the overhead. RDMSR and WRMSR instructions similarly have amplified latency. Statistical analysis of hundreds of timing measurements can reliably distinguish native execution, single-level virtualization, and nested virtualization.
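The statistical side of this can be sketched as a threshold test over many timing samples. The cycle counts below are invented for illustration; real values depend on the CPU and would be calibrated empirically per machine:

```python
import statistics

# Illustrative thresholds (cycles) -- assumptions, not measured values.
NATIVE_MAX = 300        # CPUID retires in hundreds of cycles on bare metal
SINGLE_VM_MAX = 2500    # one VM-exit round trip adds large overhead
                        # anything slower suggests a second hypervisor

def classify_environment(cpuid_cycles):
    """Classify hundreds of CPUID timing samples (in cycles) as
    native / virtualized / nested. The median resists outliers from
    interrupts and frequency scaling."""
    m = statistics.median(cpuid_cycles)
    if m <= NATIVE_MAX:
        return "native"
    if m <= SINGLE_VM_MAX:
        return "virtualized"
    return "nested"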
Anti-cheats collect multiple hardware identifiers to create a unique fingerprint that survives account banning:
- SMBIOS UUID: read via NtQuerySystemInformation(SystemFirmwareTableInformation, ...) or directly from the firmware table.
- Disk serial numbers: read via IOCTL_STORAGE_QUERY_PROPERTY. These are stable identifiers that survive OS reinstallation.
- Machine GUID: MachineGuid in HKLM\SOFTWARE\Microsoft\Cryptography, or more persistently, the UEFI firmware’s platform UUID accessible via SMBIOS.

The Vanguard analysis from the rhaym-tech GitHub gist documents vgk.sys collecting BIOS information, including the system UUID and various SMBIOS fields, which are combined into a hardware fingerprint for ban enforcement.
HWID spoofing involves modifying the identifiers that the anti-cheat reads to evade a hardware ban. Spoofing approaches include:
Anti-cheats detect spoofing by cross-referencing multiple identifier sources. If the SMBIOS UUID is FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF (a common spoofed value), that is an immediate flag. If the reported disk model is “Samsung 970 EVO” but the disk serial number format does not match Samsung’s format, that is a spoof indicator. If the UEFI firmware tables report one UUID and the registry reports a different one, the registry value has been tampered with.
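These cross-checks can be sketched as follows. The Samsung serial rule is a simplified stand-in for the vendor-specific format checks described above, and all names are illustrative:

```python
SPOOFED_UUIDS = {
    "FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF",
    "00000000-0000-0000-0000-000000000000",
}

def spoof_indicators(smbios_uuid, registry_uuid, disk_model, disk_serial):
    """Cross-reference identifier sources and return a list of flags."""
    flags = []
    if smbios_uuid.upper() in SPOOFED_UUIDS:
        flags.append("known-spoofed SMBIOS UUID")
    if smbios_uuid.upper() != registry_uuid.upper():
        # Firmware tables and registry disagree: registry was tampered with.
        flags.append("firmware/registry UUID mismatch")
    # Hypothetical vendor rule: Samsung drive serials begin with "S".
    if "Samsung" in disk_model and not disk_serial.startswith("S"):
        flags.append("disk serial does not match vendor format")
    return flags
```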
The escalation we have seen over the past decade follows a clear pattern:
Firmware attacks are particularly concerning because they survive OS reinstallation, are invisible to all kernel-level inspection, and are extremely difficult to detect without physical access to the device for firmware verification. There is no widespread anti-cheat defense against firmware cheats today.
The next-generation threat is aimbots powered by computer vision models that run on the GPU or a secondary computer. These systems use a camera or screen capture to analyze the game frame, identify targets, and move the mouse via hardware (a USB HID device, bypassing software input inspection entirely). The mouse movements they produce can be configured to mimic human motion patterns, making statistical detection much harder.
An AI aimbot operating via hardware HID is, from the game machine’s perspective, completely indistinguishable from a human using a mouse. All input comes through legitimate hardware channels. No code runs in the game process. The kernel is entirely clean. The only detection surface is the behavioral profile: the accuracy, reaction times, and movement patterns that the AI produces.
This is why the behavioral ML approaches discussed in section 10 are not optional but increasingly central to effective anti-cheat.
Kernel-level anti-cheat is deeply unpopular among privacy advocates. The criticisms are substantive:
A driver running at ring 0 with boot-time loading has access to everything on the system. While BattlEye, EAC, and Vanguard are not documented to abuse this access for surveillance, the technical capability exists. The ARES 2024 paper’s analysis underscores that the trust model is identical to what we use for security-critical software, which means any vulnerability in these components is a local privilege escalation to ring 0.
The fact that games require installation of boot-time kernel drivers as a condition of play is also a significant attack surface concern. A vulnerability in vgk.sys is a local privilege escalation to ring 0. The anti-cheat software itself becomes an attack target.
The most technically promising direction for anti-cheat is remote attestation. Rather than running a ring-0 driver that actively fights cheats, the system proves to the game server that it is running in a known-good state. TPM-based measured boot, combined with UEFI Secure Boot, can generate a cryptographically signed attestation that specific bootloaders, kernels, and drivers were loaded. The server refuses connections from systems that cannot provide valid attestation.
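The measurement chain underneath this can be illustrated with the standard PCR-extend construction. This sketch covers only the hash chain; real attestation also involves the TPM signing a quote over the PCR values with an attestation key, which is omitted here:

```python
import hashlib

def extend(pcr, measurement):
    """TPM PCR extend: new = SHA-256(old || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def measure_boot_chain(components):
    """Fold each boot component's measurement into a single PCR value."""
    pcr = b"\x00" * 32  # PCRs start zeroed at platform reset
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# Hypothetical component images, for illustration only.
good_chain = [b"bootloader-v2", b"kernel-6.8", b"anticheat.sys"]
expected = measure_boot_chain(good_chain)
tampered = measure_boot_chain([b"bootloader-v2", b"patched-kernel", b"anticheat.sys"])
```

Because extend is one-way and order-sensitive, the server can compare the attested PCR against a known-good value: any modified or reordered component yields a completely different final PCR.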
This is not a complete solution (a sufficiently sophisticated attacker can potentially manipulate attestation), but it significantly raises the bar. Attestation can coexist with traditional scanning to provide defense in depth.
Cloud gaming (GeForce Now, Xbox Cloud Gaming) is architecturally the ultimate anti-cheat for certain game categories. If the game runs in a data center and only video is streamed to the client, there is no game client code to exploit, no game memory to read, and no local environment to manipulate. The cheat attack surface reduces to input manipulation and video analysis, both of which have relatively straightforward detection approaches.
The constraint is latency: cloud gaming is unsuitable for competitive titles where single-digit millisecond reaction times matter. For casual and semi-competitive play, cloud delivery may increasingly be the answer.
Modern kernel anti-cheat systems represent a layered defensive architecture that operates across every available level of the Windows privilege model:
Kernel callbacks (ObRegisterCallbacks, PsSetCreateProcessNotifyRoutineEx, PsSetLoadImageNotifyRoutine) provide real-time visibility into system events with the ability to actively block malicious operations.

No single technique is sufficient. Kernel callbacks can be bypassed by DMA attacks. Memory scanning can be evaded by hypervisor-based cheats that intercept memory reads. Behavioral detection can be fooled by sufficiently human-mimicking AI. Hardware fingerprinting can be defeated by hardware spoofers. It is the combination of all these layers, continually updated in response to new evasion techniques, that provides meaningful protection.
The trajectory of this arms race points toward hardware attestation and server-side verification as the ultimate foundations of trustworthy game security. Software-only client-side protection will always be asymmetric: defenders must check everything, attackers need only find one gap. Hardware attestation shifts this asymmetry by making it extremely difficult to demonstrate a trustworthy state while operating a modified system.
Until that foundation is universally available and enforced, kernel anti-cheat remains the best practical defense available, with all the associated complexity, privacy implications, and attack surface that entails.
Collins, R. et al. “Anti-Cheat: Attacks and the Effectiveness of Client-Side Defences.” CheckMATE 2024 (Workshop co-located with CCS 2024). https://tomchothia.gitlab.io/Papers/AntiCheat2024.pdf
Vella, R. et al. “If It Looks Like a Rootkit and Deceives Like a Rootkit: A Critical Analysis of Kernel-Level Anti-Cheat Systems.” ARES 2024. https://arxiv.org/pdf/2408.00500
Sousa, J. et al. “AntiCheatPT: A Transformer-Based Approach to Cheat Detection in First-Person Shooter Games.” 2025. https://arxiv.org/html/2508.06348v1
secret.club. “Reversing BattlEye’s anti-cheat kernel driver.” 2019. https://secret.club/2019/02/10/battleye-anticheat.html
secret.club. “Easy Anti-Cheat integrity check bypass.” 2020. https://secret.club/2020/04/08/eac_integrity_check_bypass.html
back.engineering. “Reversing BEDaisy.sys.” 2020. https://back.engineering/blog/2020/08/22/
Aki2k. “BEDaisy Reverse Engineering.” GitHub. https://github.com/Aki2k/BEDaisy
archie-osu. “Vanguard Dispatch Table Hooks Analysis.” 2025. https://archie-osu.github.io/2025/04/11/vanguard-research.html
rhaym-tech. “Vanguard vgk.sys Analysis Gist.” GitHub Gist. https://gist.github.com/rhaym-tech/f636b76deeca15528e70304b5ee95980
donnaskiez. “ac: Open Source Kernel Anti-Cheat.” GitHub. https://github.com/donnaskiez/ac
Remote attestation is the ultimate surrender. It's not really your machine anymore. You don't have the keys to the machine. Even if you did, nobody would trust attestations made by those keys anyway. They would only trust Google's keys, Apple's keys. You? You need not apply.
False, people that have information they shouldn't have will act in detectable ways, even if they try their hardest not to.
They won way more than they lost, people who left got given a free pass for ratting the remaining people out.
The security of PCs is still poor. Even if you had every available security feature right now, it's not enough for the game to be safe. We still need to wait for PCs to catch up with the state of the art, and then wait 5+ years for devices to make it into the wild and gain enough market share to make targeting them commercially viable.
This is the exact sort of nonsense situation I want to prevent. We should own the computers, and the corporations should be forced to simply suck it up and deal with it. Cheating? It doesn't matter. Literal non-issue compared to the loss of our power and freedom.
It's just sad watching people sacrifice it all for video games. We were the owners of the machine but we gave it all up to play games. This is just hilarious, in a sad way.
There are far better ways to detect cheating, such as calculating statistics on performance and behaviour and simply binning players with those of similar competency. This way, if cheating gives god-like behaviour, you play with other god-like folks. No banning required. Detecting the thing cheating enables is much easier than detecting the ways people obtain it; it creates a single point of detection that is hard to avoid and can be done entirely server side, with multiple tiers of how much server-side calculation a given player consumes. Milling around in bronze levels? Why check? If you aren't performing so well that you can leave the low ranks, perhaps we need cheats as a handicap; unless you're consistently performing well out of distribution, at which point you catch smurfing as well.
Point is, focusing on detecting the thing people care about, rather than one of the myriad ways people may gain that unfair edge, is going to be easier and more robust while asking less egregious things of users.
Simply put, the game companies want to own our machines and tell us what we can or can't do. That's offensive. The machine is ours and we make the rules.
I single out kernel level anticheats because they are trying to defeat the very mitigations we're putting in place to deal with the exact problems you mentioned. Can't isolate games inside a fancy VFIO setup if you have kernel anticheat taking issue with your hypervisor.
You don't play a "match", and you don't play "against" other players most of the time. In this context "botting" and "cheating" overlap because having your character do stuff 24/7 unattended is an evident advantage over the rest of the population, but you aren't directly hindering anyone's progress the vast majority of the time.
How often does actual cheating happen in WoW, anywhere it matters? M+? Raiding? PvP?
Anticheats, especially kernel-mode ones, do not make the problem smaller. All they do is make it more rewarding for capable people.
ESP is a lot more obvious to a machine than one might think, the subtle behavior differences are obvious to a human and even more so for a model. Of course none of that can be proven, but it can increase the scrutiny of such players from player reports.
The scene has shifted immensely in the last few years; everyone and their grandmother has DMA now, I mean you can buy these off Amazon. Koreans are a bit stuck since most of them use gaming cafes, so they've been slow adopters, but cafe shops have the benefit of using an old version of Hyper-V, which allows you to just use the method described above. Hyper-V cheats are the most popular for valorant.
I would argue that valorant and overwatch are pretty much on the same level based on what it feels like to play. I've seen just as many visible cheaters in valorant as in overwatch, although I will admit that my knowledge is pretty outdated since around mid 2025. Valorant allows you to ** around, so that might be related; overwatch also bans rage hackers way faster than valorant does.
So no, my post is pretty accurate.
The reason cheating is a problem at all is that instead of playing with friends, you use online matchmaking to play with equally alienated online strangers. This causes issues well in excess of cheating, including paranoia over cheating.
Secure boot with software attestation could also be used for good.
I think that’s an incredibly rare stance not held by the vast majority of gamers, including competitive ones.
No amount of netcode can solve the fact that if I see you on my screen and you didn’t see me, it’s going to feel unfair.
> With that goal in mind, we released a patch as soon as we understood the method these cheats were using. This patch created a honeypot: a section of data inside the game client that would never be read during normal gameplay, but that could be read by these exploits. Each of the accounts banned today read from this "secret" area in the client, giving us extremely high confidence that every ban was well-deserved.
Valve has spent a lot of time and money on machine learning models which analyze demo files (all inputs). Yet Counter-Strike is still infested with cheaters. I guess we can speculate that it's just a faulty implementation, but clearly the problem isn't just "throw a ML model at the problem".
Do you have a source for this?
Maybe this has changed since CS:GO, but in that game you could get VAC banned just for booting the game with cheats running, even if you only demonstrated them in a local game against bots.
So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…
When you have enough randomly distributed variables, by the law of large numbers some of them will be anomalous by pure chance. You can't just look at any statistical anomaly and declare it must mean something without investigating further.
In science, looking at a huge number of variables and trying to find one or two statistically significant variables so you can publish a paper is called p hacking. This is why there are so many dubious and often even contradictory "health condition linked to X" articles.
Behavioral analysis is way harder in practice than it sounds, because most closet cheaters do not give enough signal to stand out, and the clusters move fast: the way people play the game always changes. It's not a metric-selection problem, as it might appear to an engineer; you need to watch the community dynamics, and currently only humans are able to do that.
Anyway, this isn’t the Olympics, a professional sport, or Chess. It’s more like pickup league. Preserving competitive purity should be a non-goal. Rather, aim for fun matches. Matchmaking usually tries to find similar skill level opponents anyway, so let cheaters cheat their way out of the wider population and they’ll stop being a problem.
Or, let players watch their killcams and tag their deaths. Camper, aimbot, etc etc. Then (for players that have a good sample size of matches) cluster players to use the same tactics together.
Treating games like serious business has sucked all the fun out of it.
(Not being sarcastic.)
Kernel anti-cheat isn't an elegant solution either. It's another landmine: security holes, false positives, broken dev tools, and custody battles with Windows updates. And pushing more logic server-side still means weeks of netcode tuning and a cascade of race conditions every time player ping spikes, so the idea that this folds to "better code discipline" is fantasy.
The problem is that traditional cheats (aimbot, wallhack, etc.) give users such a huge edge that they are multiple standard deviations from the norm on key metrics. I agree with you on that and there are anticheats that look for that exact thing.
I've also seen anticheats where flagged users have a session reviewed. EG you review a session with "cheats enabled" and try to determine whether you think the user is cheating. This works decently well in a game like CS where you can be reasonably confident over a larger sample size whether a user is playing corners correctly, etc.
The issue with probing for game world entities is that at some point, you have to resolve it in the client. EG "this is a fake player, store it in memory next to the other player entities but don't render this one on screen." This exact thing has happened in multiple games, and has worked as a temporary solution. End of the day, it ends up being a cat and mouse game. Cheat developers detect this and use the same resolution logic as the game client does. Memory addresses change, etc. and the users are blocked from using it for a few hours or a few days, but the developer patches and boom, off to the races.
These days game hacks are a huge business. Cheats are often offered as a subscription and can range anywhere from tens to hundreds of dollars a month. It's big money, and some of the larger cheat makers are full-blown companies with tens of thousands of customers.
I think you're realistically left with two options. Require in-person LAN matches with hardware provided by the tournament which is tamper-resistant. Or run on a system so locked down that cheats don't exist.
Both have their own problems... In-person eliminates most of that risk but it's always possible to exploit. Running on a system which is super locked down (say, the most recent playstation) probably works, until someone has a 0day tucked away that they hoard specifically for their advantage. An unlikely scenario but with the money involved in some esports... Anything is possible.
https://www.documentcloud.org/documents/24698335-la22cv00051...
It's kind of weird that we still don't have distributed computing infrastructure. Maybe that will be another thing where agents can run near the data they're crunching, on generic compute nodes.
People always freak out when I mention secure boot, and the funniest responses are usually the ones from people who threaten to abandon Windows for macOS (which has had secure boot on by default for more than a decade).
I'm not super technically knowledgeable about secure boot, but as far as I understand, you need to have a kernel signed by a trusted CA, which sucks if you want to compile your own, but is a hurdle generally managed by your distro, if you're willing to use their kernel.
But if all else fails you can always disable secure boot.
That solution only works on servers hosted by players - I've never seen huge game companies that run their own servers (like GTA) have dedicated server admins. I guess they think they can just code cheaters out of their games, but they never can.
It usually takes months, if not years for cheaters to get banned, but it takes a couple of dollars for a cheater to get a new account and start cheating again. Every time Valve fine tunes their models, they end up accidentally banning more innocent players in the process, so nobody has trust in that system anyways. There's too many datapoints to handle in competitive games, and there is no way to set a threshold that doesn't end up hurting innocent people in the process.
Anti-cheat is not used to "protect" bronze level games. FACEIT uses a kernel level anti cheat, and FACEIT is primarily used by the top 1% of CS2 players.
A lot of the "just do something else" crowd neglects to realize that anticheat is designed to protect the integrity of the game at the highest levels of play. If the methods you described were adequate, the best players wouldn't willingly install FACEIT - they would just stick with VAC which is user-level.
> There are far better ways to detect cheating, such as calculating statistics on performance
Ask any CS player how VAC’s statistical approach compares to Valorant’s Vanguard and you will stop asserting such foolishness
The problem with what you are saying is that cheaters are extremely determined and skilled, and so the cheating itself falls on a spectrum, as do the success of various anticheat approaches. There is absolutely no doubt that cheating still occurs with kernel level anticheats, so you’re right it didn’t “solve” the problem in the strictest sense. But as a skilled player in both games, only one of them is meaningfully playable while trusting your opponents aren’t cheating - it’s well over an order of magnitude in difference of frequency.
There is guidance on "Active" attacks [1], which is to set up your TPM secrets so they additionally require a signature from a secret stored securely on the CPU. But that only addresses secret storage, and does nothing about the compromised measurements. I also don't know what would be capable of providing the CPU secret for x86 processors besides... an embedded/firmware TPM.
[1] https://trustedcomputinggroup.org/wp-content/uploads/TCG_-CP...
The vast, vast majority of skilled FPS players will predict their shots and shoot where they think the enemy player will be relative to the known hit detection of the game. In high level play for something like r6 siege, I’d say it’s 99% shooting before you can possibly know where they are by “feeling”
To you. I’m perfectly happy to run a kernel-level anticheat. I’m already running their code on my machine, and it can delete my files, upload them as encrypted game traffic, steal my crypto keys, and screenshot my bank details and private photos, all without running at the kernel level.
> trying to solve a social problem with technology
I disagree. I’m normally on the side of not doing that, but increasing the player pool and giving players access to more people at their own skill level is a good thing.
You may think it's your "god-given right" to cheat in multiplayer games, but the overwhelming majority of rational people simply aren't going to play a game where every lobby is ruined by cheaters.
By this same logic: As far as I'm concerned, if the game developer only wants to allow players running anticheat to use their servers then they're just exercising their god given rights as the owner of the server.
You can argue about the methods used for anticheat, but your comment here is trying to defend the right to cheat in online games with other people. Just no.
* I use easy cheats for single player games - for example, infinite jumps in cyberpunk 2077 are just huge amounts of fun :)
* I have zero desire to cheat in multiplayer games. Not some high-morality righteous horse; just, what's the point? I have fun even when I lose, and having something else play for you takes away from the visceral fun that I get.
* I could understand, even if not agree, people who cheat for profit. That's the basis of all crime everywhere.
* I do not understand people who cheat in multiplayer games not-for-profit. I feel you need to have both a) some sort of anti-social / non-social tendency, and b) dopamine rushes along pathways I don't.
I'd be genuinely curious to hear about your acquaintances who cheat in multiplayer for no profit and why they do it :-)
The average cheater is (or was) basically a troll. They delighted in the act of ruining other people’s games, not installing the cheat. The harder you make it for them to get to that point, the less enjoyment they get.
The people you describe who are in it for the thrill of breaking through are not the ones playing 6 hours every night because the game itself is not the thrill. It’s the exploration of the hardware and software. They might get cheats set up, but once it’s working they get bored with the game and move on to another technical challenge.
That's indirectly hindering other players' progression: it causes deflation (so you can't earn as much gold selling your ores); it causes inflation (more circulating gold; yes, these are contradictory); it denies other players farm (if a bot gathered the ore, other players have to search for another vein); and illegal gold selling raises expectations (other players bought super good gear, why don't you?) and causes burn-out (because farming gold fairly is much harder than just buying it).
But mainly it just makes players angry, because they can see these bots moving in a predetermined route and stealing resources from their noses. I'm not really sure if bots are that bad in the grand scheme of things, but living players certainly don't like to compete with automatons.
There were also cheaters who used instant-cast interruptions in arenas, but it seems that competitive PvP is not that popular nowadays, so I'm not sure how widespread it is.
A more sophisticated attacker could plausibly extract key material from the TPM itself via sidechannels, and sign their own attestations.
And it is possible to silently put you into a cheating game match maker, so that you only ever match with other cheaters. This, to me, is prob. the better outcome than outright banning (which means the cheater just comes back with a new account). Silently moving them to a cheater queue is a good way to slow them down, as well as isolate them.
I mained support and tank at master level in OW, and besides ESP there is zero benefit to cheating.
Matching based on skill works only as long as you have an abundance of players to do it with. When you have to account for geography, time of day, momentary availability, and skill level, you realize that you have fractured certain player pools far too much and it's not fun for them anymore. Keep in mind that "cheaters" are also looking for matches that would maximize their cheats. Maybe it's 8PM Pacific Time with tons of players there, but it's 3 AM somewhere else with a much more limited number of players. Spoof your ping and location to be there and have fun sniping every player on the map. Sign up for new accounts on every play, who cares. Your fun as a cheater is to watch others lose their shit. You're not building a character with history and reputation. You are sniping others while they don't realize it. It may sound limited in scope and not worth the effort to you, but there are millions of people out there that ruin the game for everyone.
Almost every game I know of lets players “watch their kill cam”, and cheaters have adapted. The sniped players have a bias toward voting that the sniper was cheating, and the snipers have a bias to vote otherwise. Lean one way or the other, and it's another post on /r/gaming about how your game sucks.
This is not well done. Only the server should be able to tell what the honeypot is. The point is to spawn an entity for one or more clients which will be 100% real for them but would not matter because without cheats it has no impact on them whatsoever. When the world evolves such that an impact becomes more likely then you de-spawn it.
This will only be possible if the server makes an effort to send incomplete entity information (I believe this is common), this way the cheats cannot filter out the honeypots. The cheats will need to become very sophisticated to try and anticipate the logic the server may use in its honeypots, but the honeypot method is able to theoretically approach parity with real behavior while the cheat mitigations cannot do that with their discrimination methods (false positives will degrade cheater performance and may even leak signal as well).
For example you can use a player entity that the client hasn't seen yet (or one that exited entity broadcast/logic range for some time) as a fake player that's camping an invisible corner, then as the player approaches it you de-spawn it. A regular player will never even know it was there.
Another vector to push is netcode optimizations for anti-cheating measures. To send as little information as possible to the client, decouple the audio system from the entity information - this will allow the honeypot methods to provide alternative interpretations for the audio, such as firefights between ghosts that only cheaters will react to. This will of course be very complex to implement.
The greatest complexity in the honeypot methods will no doubt be how to ensure no impact on regular players.
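The honeypot idea above can be sketched in a few lines. Everything here is hypothetical (entity names, the despawn radius, the aim threshold are all made up): the server spawns a fake player for a suspect client, despawns it before a legitimate player could ever perceive it, and flags the client if its aim tracks an entity it should never have noticed.

```python
import math

DESPAWN_RADIUS = 40.0  # despawn before a legit client could see/hear it (hypothetical units)

class HoneypotEntity:
    """Fake player sent to one client; despawned before it can matter to honest play."""
    def __init__(self, entity_id, pos):
        self.entity_id = entity_id
        self.pos = pos
        self.active = True

    def tick(self, player_pos):
        # Despawn once the real player gets close enough that the
        # entity could matter to a non-cheating client.
        if math.dist(self.pos, player_pos) < DESPAWN_RADIUS:
            self.active = False
        return self.active

def flag_if_reacted(honeypot, aim_dir, player_pos):
    """Signal: the client aimed at an entity it should never have perceived."""
    if not honeypot.active:
        return False
    dx = honeypot.pos[0] - player_pos[0]
    dy = honeypot.pos[1] - player_pos[1]
    norm = math.hypot(dx, dy) or 1.0
    # Cosine of the angle between the aim direction (unit vector)
    # and the direction to the honeypot.
    cos = (aim_dir[0] * dx + aim_dir[1] * dy) / norm
    return cos > 0.99  # aiming almost directly at the hidden entity
```

A real implementation would of course run this against the full entity-replication pipeline; the point is only that the flagging signal costs nothing for honest clients.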
> The general simplistic answer from those who never had to design such a game or a system of “do everything on the server” is laughably bad.
> So very often with these hyper competitive games played between strangers competing for global ranking, the whole thing turns very toxic, with gamers often seeming to not even enjoy the moment to moment process, often raging at their incompetent team mates or raging at their opponents for supposedly cheating, or whathaveyou.
This is very true! I'll further grant that many competitive video games have pain points that feed this. Competition, facing failure, and recognizing that what you perceived to be a fair challenge wasn't so (e.g. cheating) does sometimes bring out the worst in people.
However, my point is that competition, and enjoying it, is something that's been fundamentally human for all our recorded history. The sensation of straining against the edge of your capabilities, to overcome a wall, and then succeeding even just barely is supreme. Competitive video games are just a subset of activities that appeal to this. And I think just as much as they are infuriating, they are also good!
Moreover, competitive video games can also be fairly social. Playing a chiller game with friends is one way to socialize, and I have nothing against that. But there are also special bonds that are forged through shared struggle, even minor struggle. For example, the fighting game community has a very strong local scene. If you can play fighting games, in most major cities in NA you can attend your local and make friends. With team competitive games, invite your homies.
Once again, I definitely do not dispute that competitive video games can be toxic. Especially in today's online culture. Taking fighting games as an example again, the online, anonymous, communities can be quite toxic. Ah, now that I've written this far, I'm realizing that maybe I've missed your point? Are you saying that it's specifically the strangers, that you never get to know and therefore trust, that makes this worse off?
There should be a physical button inside the case labeled "set up secure boot"
That would mean those who are concerned about the integrity would want to sandbox everything else instead. And even if people are ok with giving up a small bit of perf when gaming, I’m sure they’re even more happy to give up perf when doing online banking.
They will all cluster in very different latent spaces.
You don't automatically ban anomalies, you classify them. Once you have the data and a set of known cheaters you ask the model who else looks like the known cheaters.
Online games are in a position to collect a lot of data and to also actively probe players for more specific data such as their reactions to stimuli only cheaters should see.
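The "classify, don't auto-ban" approach can be sketched as a nearest-centroid comparison. This is a deliberately minimal illustration with hypothetical features (real systems would use far richer behavioral data): compute the mean feature vector of confirmed cheaters and rank everyone else by distance to it.

```python
import math

def centroid(vectors):
    """Mean feature vector of the confirmed-cheater population."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def rank_by_similarity(known_cheaters, candidates):
    """Return candidate ids sorted by distance to the cheater centroid,
    closest (most suspicious) first. Features might be reaction time,
    flick angle distribution, etc. - all hypothetical here."""
    c = centroid(list(known_cheaters.values()))
    return sorted(candidates, key=lambda pid: math.dist(candidates[pid], c))
```

The output is a review queue, not a ban list - exactly the distinction the comment above draws.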
But a good way of solving this in community managed multiplayer games is this: if a player is extremely good to the point where it’s destroying the fun of every other player: just kick them out.
Unfair if they weren’t cheating? Sure. But they can go play against better players elsewhere. Dominating 63 other players and ruining their day isn’t a right. You don’t need to prove beyond reasonable doubt they’re cheating if you treat this as community moderation.
In a 5v5 shooter this ruins 9 people's games along the way, times however many games this takes. Enough people do this and the game is ruined.
> or let players watch their killcams and tag their deaths
Players are notoriously bad at this stuff. Valve tried it with “overwatch” and it didn’t work at all.
Forgetting about anti cheat for a minute though, matchmaking for different behaviours is a super interesting topic in itself. It's very topical right now [0] and fairly divisive. Most games with a ranked mode already do this - there's a hidden MMR for unranked modes that is matched on, and players self-select into "serious" or "non serious" queues. It works remarkably well - if you ever read people saying that Quick Play is unplayable, that proves the separate queues are doing a good job of keeping the two groups apart!
[0] https://www.pcgamer.com/games/third-person-shooter/arc-raide...
Yes, its prize pool is an order of magnitude higher than that of Olympic sports or chess.
https://www.forbes.com/sites/paultassi/2025/01/20/elon-musk-...
I play FPS games competitively, and Valorant is by far the FPS with the fewest cheaters on the market.
What “Netflix did” was having dead-simple static file serving appliance for ISPs to host with their Netflix auth on top. In their early days, Netflix had one of the simplest “auth” stories because they didn’t care.
I have always wondered why more companies don't do trust-based anti-cheat management. Many cheats are obvious to anyone in the game: you see people jumping around like crazy, or a character able to shoot through walls, or something else that's impossible for a non-cheater to do.
Each opponent's game is receiving information from the cheating player's client showing it doing something impossible. I know it isn't as simple as having the game report another player automatically, because cheaters could report legitimate players... but what if each game reported cheaters, and then you waited for a pattern? If the same player is reported in every game, including games against brand new players, then we would know they were a cheater.
Unless cheaters got to be a large percentage of the player population, they shouldn't be able to rig it.
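The "wait for a pattern" idea above could look something like this (thresholds and names are hypothetical): count reports per distinct match rather than per reporter, and only flag accounts reported across many matches and in a large fraction of the matches they played - something a small ring of malicious reporters can't easily fake.

```python
from collections import defaultdict

REPORT_MATCH_THRESHOLD = 10   # distinct matches with at least one report (hypothetical)
REPORT_RATE_THRESHOLD = 0.5   # reported in over half of matches played (hypothetical)

class ReportTracker:
    def __init__(self):
        self.reported_matches = defaultdict(set)  # player -> match ids with a report
        self.played_matches = defaultdict(set)    # player -> all match ids played

    def record_match(self, match_id, players, reported):
        for p in players:
            self.played_matches[p].add(match_id)
        for p in reported:
            self.reported_matches[p].add(match_id)

    def flagged(self, player):
        played = len(self.played_matches[player])
        reported = len(self.reported_matches[player])
        # Flag only on a broad pattern: many distinct matches AND a high rate.
        return (reported >= REPORT_MATCH_THRESHOLD
                and played > 0
                and reported / played >= REPORT_RATE_THRESHOLD)
```

Because the unit is the match rather than the individual report, a single lobby spam-reporting someone counts once, while a genuine cheater accumulates hits across every game they ruin.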
I grew up with Star Trek and Star Wars, wondering what "I'll transfer 20 units to you" meant. Bitcoin was an eye-opener for the idea that "maybe this is possible". But it soon became clear to me that it's not the case: there is still no way for random agents to prove they are not malicious. It's easier within the confines of the Bitcoin network. But maybe I'm not smart enough to come up with a more generalized concept. After all, I was one of the people who read the initial Bitcoin white paper on HN, didn't understand it, and dismissed it back then.
And even that's the (relatively) straightforward part. The hard part is doing this without injuring the kernel enough that the only sensible solution for the security conscious is a separate PC for gaming.
The example still kind of applies. In the CS world, serious players use Faceit for matchmaking, which requires you to install a kernel-level anticheat. This is basically what you're suggesting, but operated by a 3rd party.
The computers are supposed to be ours. What we say, goes. Cheating may not be moral but attempts to rob us of the power that enables cheating are even less so.
I rather suspect that the reason for this is the current gaming economy of unlockable cosmetics that you can either grind for, or pay for. If people can cheat in single player or PvE, they can unlock the cosmetics without paying. And so...
My position is this is unfair discrimination that should be punished with the same rigor as literal racism. Video games are the least of our worries here. We have vital services like banks doing this. Should be illegal.
This observation is at least a decade out of date.
The average cheat developer in 2026 is doing it to make money. Either boosting accounts, training accounts to sell, gathering collectibles to sell, or selling access to the cheats themselves.
Some are just addicted, they really love the game, but playing without cheats doesn't make them feel anything so they pick the easiest solution: continue to cheat... forever.
Some are just delusional, they do not want to deal with the reality that they're not good at the game without cheats.
Some are just trolling and want to spinbot to piss people off and make people angry. It's what makes them happy.
Some don't have a choice, they started their competitive career with cheats.
Some justify it that "I made the cheat, I deserve to use it"
If you want more, I've got a whole book of reasons. I'm in a unique situation: I'm still friends with people from back when I was cheating a lot myself. In that time I established relationships with a lot of developers, and for me personally it was curiosity that got me not only into cheating, but into the whole process and development. I ended up just making Roblox games, though.
It is not "fake", a software TPM is real TPM but not accepted/approved by anticheat due to inability to prove its provenance
(Disclosure: I am not on the team that works on Vanguard, I do not make these decisions, I personally would like to play on my framework laptop)
Don't play with untrusted randoms. Play with people you know and trust. That's the true solution.
Not with 100% accuracy. This means some legitimate players would be qualified as potentially cheating.
You don't have to play with wallhacks constantly on, you can toggle. And it doesn't detect cases where you're camping with an AWP and have 150ms response time instead of 200ms. Sometimes people are just having a good day.
> cheating game match maker
This is already a thing. In CS2, you have a Trust Factor. The lower your trust factor is, the bigger the chance you will be queued with/against cheaters.
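The Trust Factor mechanic described above boils down to queueing players of similar trust together. A minimal sketch (scores and boundaries entirely hypothetical - Valve's actual system is opaque):

```python
def bucket_by_trust(players, boundaries=(0.3, 0.7)):
    """Split players into low/mid/high trust queues. Frequently reported or
    suspicious accounts drift toward low trust and mostly match each other."""
    low, mid, high = {}, {}, {}
    for pid, trust in players.items():
        if trust < boundaries[0]:
            low[pid] = trust
        elif trust < boundaries[1]:
            mid[pid] = trust
        else:
            high[pid] = trust
    return low, mid, high
```

The appeal of this design is that it never needs a confident "you are a cheater" verdict - it just quietly concentrates the suspicious accounts in the same lobbies.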
It is, if you're not cheating and are in fact just that good. That's called competitive sports, which participants voluntarily engage in.
Unpopular opinion: cheaters don’t, griefers do.
“Cheater” is a pejorative for someone who sidesteps the rules and uses technology instead of, uh, pardon the potentially loaded word choice, innate skills. They don’t inherently want to see others suffer as they stomp - it’s a matchmaking bug that they’re put where they don’t belong. They just want to do things they cannot do on their own but that are technically possible. A more positive term for that is a “hacker”.
Griefers are a different breed: they don’t just enjoy their own success but get entertained by others’ suffering. Not a cheating issue TBH (cheats merely enable more opportunities), more of a “don’t match us anymore, we don’t share the same ideas of fun” thing. “Black hat” is a close enough term, I guess.
YMMV, but if someone performs adequately for my skill levels (that is, they also don’t play well) then they don’t deprive me of any fun irrespective of how they’re playing.
I agree that killcam tagging is not great for, like, actual “you are breaking the rules” type enforcement (because, yeah, players will generate a ton of false positives). But if players had a list of traits and matchmaking tried to minimize some distance in the trait space (admitting it couldn’t be perfect), it might result in more fun matches.
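The trait-space idea could be as simple as greedy pairing by distance. Everything here is a made-up illustration (the trait names, the greedy strategy - a real matchmaker would solve a proper assignment problem):

```python
import math

def pair_players(players):
    """Greedily pair players by Euclidean distance in trait space.
    players: dict of id -> trait vector, e.g. [aggression, chattiness, tryhard].
    Greedy and suboptimal - just a sketch of the idea."""
    remaining = dict(players)
    pairs = []
    while len(remaining) >= 2:
        pid, vec = remaining.popitem()
        # Match this player with whoever is closest in trait space.
        best = min(remaining, key=lambda q: math.dist(vec, remaining[q]))
        pairs.append((pid, best))
        del remaining[best]
    return pairs
```

Even this crude version captures the goal: players with similar ideas of fun end up in the same lobby, independently of any cheating verdict.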
Out of curiosity I did a quick internet search and a couple of months ago a new wave of bots has emerged. Those bots also join as majority group but never fully join the game, they simply take up slots in a team, preventing others from joining. Makes you wonder why the server isn't timing them out.
Players in some games with custom servers run webs of trust (or rather distrust, shared banlists). They are typically abused to some degree and good players are banned across multiple servers by admins acting in bad faith or just straight up not caring. This rarely ends well.
I used to run popular servers for PvP sandbox games and big communities, and we used votebans/reports to evict good players from casual servers to anarchy ones, where they could compete, but a mod always had to approve the eviction using a pretty non-trivial process. This system was useless for catching cheaters, we got them in other ways. That's for PvP sandboxes - in e-sports grade games reports are useless for anything.
Problem is that only works if the two OSes are different (Windows vs Linux) or else they can just stomp each other
> And I do want a high end PC for other use cases.
Right, you don't want two devices (that's fair). How can you _possibly_ trust the locked down device won't interfere with the other open software it's installed side by side with?
But the main point there is that this setup is prohibitively expensive for most cheaters.
Sort of like nuclear weapons
It would add some latency but could be opt-in for those that care enough for all players in a match to take the hit.
Also you can plug a mouse in a console… that's a weird excuse.
Kernel-level AC is a compromise for sure, and it's the gamer's job to assess whether the game is worth the privacy risk. But I'd say it's much more their right to take that risk than it is the cheater's right to ruin 9 other people's time for their own selfish amusement.
If a community manages a server, it’s basically private property. And community managed servers are always superior to official publisher-managed servers. Anticheat - or just crowd management - is done hands on in the server rather than automated, async, centralized.
Buying the game might mean you have a ”right” to play it, but not on my server you don’t.
I’m talking about normal old-fashioned server administration now, i.e. people hosting/renting their game infra and doing the administration: making rules, enforcing the rules by kicking and banning, charging fees either for VIP status (meaning no queuing etc.) or even to play at all.
People who engage in competitive sports all agree to it. Most people want to play for fun. They have a natural right to do so.
Valve did it for CS, and it was called overwatch, sorry. [0]
They have inhuman skills usually paired with terrible game IQ and generally awful toxicity. They get boosted up to play with intelligent players purely because they can hold a button to outplay. It gets to the point where you have a player on your team who has no idea how to play but is mechanically good and it breaks the entire competitiveness of the game.
Nothing is perfect in the software world, and this is the best tool for the job.
> you can achieve the same with user mode anticheats
A user mode anti cheat is immediately defeated by a kernel mode cheat, and cheaters have already moved past this in practice.
A user mode anti cheat (on windows) with admin privileges has pretty much full system access anyway, so presumably if you have a problem with kernel AC you also have a problem with user mode.
Lastly, cheating is an arms race. While in theory, the cheaters will always win, the only thing that actually matters is what the cheaters are doing in practice. Kernel mode is default even for free cheats you download, so the defaults have to cover that.
Cheaters want to dominate other players, feel like they deserve to dominate other players and are perfectly happy for other players to suffer as long as they feel good.
If your PC is so important to you, then maybe don't install this particular software.
It's all about trade-offs.
It works fine on LAN, but as soon as the connection goes further than inside your house, it’s utterly horrible.
You can't make a competitive fps game with a dumb terminal, it can't work because the latency is too high so that's why you have to run local predictive simulation.
You don't want to wait for the server to ack your inputs.
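The local predictive simulation mentioned above is usually structured as "predict, then reconcile". A minimal 1D sketch (all names and numbers hypothetical): the client applies inputs immediately, and when an authoritative server snapshot arrives it snaps to it and replays the inputs the server hasn't acknowledged yet.

```python
class PredictedClient:
    """Sketch of client-side prediction with server reconciliation (1D position)."""
    def __init__(self):
        self.pos = 0.0
        self.seq = 0
        self.pending = []  # (seq, move) inputs not yet acked by the server

    def apply_input(self, move):
        # Apply immediately rather than waiting a full round trip.
        self.seq += 1
        self.pos += move
        self.pending.append((self.seq, move))
        return self.seq

    def on_server_state(self, acked_seq, server_pos):
        # Snap to the authoritative state, drop acked inputs,
        # then replay the inputs the server hasn't seen yet.
        self.pending = [(s, m) for s, m in self.pending if s > acked_seq]
        self.pos = server_pos
        for _, move in self.pending:
            self.pos += move
```

When the server agrees with the prediction, the replay lands the client exactly where it already was and the player never notices; when it disagrees (e.g. the server rejected a move), the replay produces the familiar small "rubber-band" correction.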
I played COD4 a lot, though not competitively. I used to say that I had a bad day if I didn't get called a cheater once.
I didn't cheat, never have, but some people are just not aware of where the ceiling is.
The cheaters that annoyed us back then were laughably obvious. They'd just hold the button with a machine gun and get headshots after headshots, or something blatant like that.
But anyway counterstrike did have community policing of lobbies called overwatch - https://counterstrike.fandom.com/wiki/Overwatch
It was terrible, as it required the community to conclude beyond reasonable doubt that the suspect was cheating, and cheats today are sophisticated enough to make that conclusion very difficult to reach.
But so far that still seems to be miles away.
If it kills online gaming, then so be it. I accept that sacrifice. The alternative leads to the destruction of everything the word hacker ever stood for.
Best I’ve ever seen was some online discussions about motives, but I never compiled any statistics out of random anecdotes (that must be biased and probably not representative).
First, points of ingress: registry, file caches, DNS, vulnerable driver logs.
Memory probe detection: working sets, page guards, non-trivial obfuscation, atoms, fibers.
Detection: usermode exposes a lot of kernel internals: raw access to window and process handles, 'undocumented' syscalls, win32, user32, kiucd, apcs.
Loss of functionality: no hooks, limited point of ingress, hardened obfuscation, encrypted pages, tamper protection.
I could go on, but generally "lol go kernelmode" is sometimes way more difficult than just hiding yourself among the legitimate functionality of 3rd party applications.
This is everything used by anticheats today, from usermode. The kernel module is more often than not used for integrity checks, vm detection and walking physical memory.
Kernel level anticheat isn't a silver bullet, either. It just simplifies the work of the anticheat programmers. I personally think that the silver bullet is behavioral anticheat and information throttling (don't send the player information about other players that he can't see/hear)
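The information-throttling half of that suggestion can be sketched server-side. The geometry here is grossly simplified (walls as vertical lines; real engines raycast against the level mesh and add audio/occlusion heuristics), but the principle holds: only replicate an enemy to a client when a line-of-sight test passes, so a wallhack has nothing to reveal.

```python
def line_of_sight(a, b, walls):
    """True if the segment a-b crosses no wall. Walls are vertical lines x=w
    spanning all y - a deliberately crude stand-in for real raycasting."""
    for w in walls:
        if min(a[0], b[0]) < w < max(a[0], b[0]):
            return False
    return True

def visible_entities(viewer_pos, entities, walls):
    """Entities the server should replicate to this client this tick."""
    return {eid: pos for eid, pos in entities.items()
            if line_of_sight(viewer_pos, pos, walls)}
```

The hard part in practice is everything this sketch omits: sound cues, shadows, peeker's advantage, and entities that become visible mid-tick, which is why full occlusion culling is rare even in competitive shooters.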
There's an exception with fighting games. Fighting games generally don't have server simulations (or servers at all), but every single client does their own full simulation. And 2XKO and Dragon Ball FighterZ have kernel anti cheat.
Well I'm just nitpicking and it's different because it's one of the few competitive genres where the clients do full game state simulations. Another being RTS games.
True of everything. Getting good just lets you see the skill gaps. I've sunk a serious chunk of time into both pool and chess. In both I'd be willing to take a bet that I can beat the median player with my eyes closed (in pool, closing them after walking the table but before getting down on the shot).
And in both of those activities, there are still like 10-20 levels of "person at skill level A should always win against person at skill level B" between me and someone who is ACTUALLY good at pool or chess. Being charitable, in the grand scheme of things I might be an intermediate player.
My understanding of the proposal is that it advertises no invasive anticheat (meaning mostly rootkit/kernel anticheat). So, the value proposition is anyone who doesn't want a rootkit on their computer. This could be due to anything from security concerns to desiring (more) meaningful ownership of one's devices.
And the way community policing worked in the past is that the "police" (refs) could just kick or ban you. They don't need a trial system if the community doesn't want that.
I guess I didn't exactly make that clear...
A few of the arguments advanced by the "anti-anticheat" crowd that inevitably pops up in these threads are "anticheat is ineffective so there's no point to using it" and "anticheat is immoral because players aren't given a choice to use it or not and most of them would choose to not use it".
I don't believe that either of these are true (and given the choice I would almost never pick the no-anticheat queue), but there's not a lot of good high-quality data to back that up. Hence, the proposal for a dual-queue system to try to gather that data.
Putting in the community review of the no-anticheat pool is just to head off the inevitable goalpost-moving of "well of course no system would be worse than a crappy system (anticheat), you need to compare the best available alternative (community moderation)".
The primary one is a standard user-mode software module, that does traditional scanning.
The AI mechanism you're referring to is these days referred to as "VAC Live" (previously, VACNet). The primary game it is deployed on is Counter-Strike 2. From what we understand, it is a very game-dependent stack, so it is not universally deploy-able.
You are hijacking this thread about VOLUNTARY ceding of freedom, as if the small community even willing to install these is a slippery slope to something worse. You have a point when it comes to banking apps on rooted phones, and I'm with you on that, but this is not the thread for it.
If you can design a better one without those drawbacks, then you could try to release it.
So let me summarize the above thread:
Yes, there will always be workarounds for ANY level of anti-cheat. Yes, kernel-mode anti-cheat detects a higher number of cheats in practice, and that superiority seems durable going forward.
There, I think we can all agree on those. No need to reiterate what has already been posted.
We have two possible options here, it's pretty obvious which is the more likely one.
It is pretty ridiculous to suggest that nobody has ever been caught cheating in overwatch pro games.
source: observation of games implying stronger anti-cheat measures over time while customer counts stay exactly the same or grow. League of Legends is a prime example, although it did create a crater for a while. This all comes from people who actively sell cheats.
Are players who take advantage of developer-supplied aim assist and other assistive technologies "motivated by a toxic sense of self regard and a desire to humiliate others"?
Do you have evidence valve is working to infect the linux kernel for everyone?
Gonna have to ponder if people who aren't cheating are cheaters.
Mind you, it doesn't mean that the Linux kernel will be "infected for everyone". It means that we'll see the desktop Linux ecosystem forking into the "secure" Linux which you don't actually have full control of but which you need to run any app that demands a "secure" environment (it'll start with KAC but inevitably progress to other kinds of DRM such as video streaming etc). Or you can run Linux that you actually control, but then you're missing on all those things. Similar to the current situation with mainline Android and its user-empowering forks.
anyway: I already edited with the source.
> Or you can run Linux that you actually control, but then you're missing on all those things
We cannot allow this stuff to be normalized. We can't just sit by and allow ourselves to be discriminated against for the crime of owning our own devices. We should be able to have control and have all of those nice things.
Everything is gonna demand "secure" Linux. Banks want it because fraud. Copyright monopolists want it because copyright infringement. Messaging services want it because bots. Government wants it because encryption. At some point they might start demanding attestation to connect to the fucking internet.
If this stuff becomes normal it's over. They win. I can't be the only person who cares about this.
People can dual boot, what's wrong with a special gaming linux distribution?