To see exactly what the cheater sees, you must capture the frame at a point in the rendering pipeline that the cheat cannot easily manipulate:
- GPU/Graphics API Hooking: Use DirectX or Vulkan hooking to grab the framebuffer just before it is sent to the display. Professional anti-cheats like BattlEye and Vanguard use these methods to detect ESP overlays and radars.
- Kernel-Level Capture: Operating at the kernel level allows the anti-cheat to access memory and graphics data before user-mode cheats can apply stealth techniques.
- Players who accumulate many reports, or reports from multiple players within a short period, especially alongside an implausible score such as 130 kills and 2 deaths, are flagged by the system for review.
- The system on the suspect player's client takes a screenshot of the game, encrypts it, sends it securely to the server, and places it in a queue for processing.
- The processing pipeline uses an AI VLM (vision-language model); something small like Qwen2.5-VL-3B should be enough, but you can use anything. The model processes the image, and you can prompt it to detect any visual signs of hacks.
Reasoning allows it to “think” before it acts. In an anti-cheat context, it can logically deduce: “The player is aiming at a wall, but a red highlight is visible behind the texture; therefore, an ESP hack is likely.”
It has robust support for Function Calling via Qwen-Agent. You can define a tool like issue_report(accountid, cheat_type, confidence, ...) and the model will trigger it based on the snapshot analysis.
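A tool like the one described can be declared in the JSON-schema style that Qwen-Agent and OpenAI-compatible runtimes accept for function calling. The field names inside `parameters` and the dispatch helper below are assumptions for illustration, not a documented API:

```python
# Hypothetical tool schema for an issue_report function; the model emits a
# call with these arguments after analyzing the snapshot.
ISSUE_REPORT_TOOL = {
    "type": "function",
    "function": {
        "name": "issue_report",
        "description": "File a cheat report for an account after snapshot analysis.",
        "parameters": {
            "type": "object",
            "properties": {
                "account_id": {"type": "string"},
                "cheat_type": {
                    "type": "string",
                    "enum": ["esp", "aimbot", "radar", "unknown"],
                },
                "confidence": {"type": "number", "minimum": 0, "maximum": 1},
            },
            "required": ["account_id", "cheat_type", "confidence"],
        },
    },
}

def handle_tool_call(name: str, args: dict) -> dict:
    """Dispatch a model-emitted tool call; here we just echo a record that a
    downstream queue or database would store."""
    if name != "issue_report":
        raise ValueError(f"unknown tool: {name}")
    return {"stored": True, **args}
```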
It has strong pixel-level grounding: it can return precise coordinates and bounding boxes, which is vital for localizing cheat overlays in the frame.
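Those coordinates are only useful if the pipeline can consume them, so you would typically prompt the model to answer in a fixed JSON shape and parse it. The `detections`/`bbox` format below is an assumed prompt contract, not a built-in model output:

```python
import json

def parse_overlay_boxes(model_output: str) -> list[tuple[int, int, int, int]]:
    """Parse bounding boxes from a VLM reply, assuming the model was prompted
    to answer with JSON like:
    {"detections": [{"bbox": [x1, y1, x2, y2], "label": "esp_box"}]}
    The exact format depends on your prompt, not on the model itself."""
    data = json.loads(model_output)
    boxes = []
    for det in data.get("detections", []):
        x1, y1, x2, y2 = det["bbox"]
        boxes.append((x1, y1, x2, y2))
    return boxes

# Example reply a prompted model might produce (values are illustrative):
reply = '{"detections": [{"bbox": [412, 230, 468, 310], "label": "esp_box"}]}'
```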
To process demos, you can render them as .mp4 from the suspect player's perspective and use a model like Cosmos Reason, which can process the video and output specific events with timestamps.
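The timestamped events can then be turned into structured records for triage. The `[HH:MM:SS] description` line format below is an assumed prompt contract for the video model, not its native output:

```python
import re
from dataclasses import dataclass

@dataclass
class DemoEvent:
    timestamp_s: int   # offset into the demo, in seconds
    description: str

# Hypothetical line format we prompt the video model to emit, e.g.
#   "[00:01:23] player snaps to an enemy through smoke"
LINE_RE = re.compile(r"\[(\d{2}):(\d{2}):(\d{2})\]\s*(.+)")

def parse_events(model_output: str) -> list[DemoEvent]:
    """Turn timestamped lines from the video model into structured events
    that can be auto-triaged or forwarded for human review."""
    events = []
    for line in model_output.splitlines():
        m = LINE_RE.match(line.strip())
        if m:
            h, mi, s, desc = m.groups()
            events.append(DemoEvent(int(h) * 3600 + int(mi) * 60 + int(s), desc))
    return events
```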
You can then keep the pipeline fully automated, or forward flagged cases for human review while testing it.
You can really run all of this on a single instance of the AI model; nothing expensive is required.