FaangCoder Glossary

Plain-language definitions of the kernel-stealth, anti-cheat, and FAANG-interview terms we use across the site. Each entry has a stable anchor — link directly to a term with /glossary#<slug>.

A

Anti-cheat

Software that detects and prevents cheating in competitive contexts. In the gaming industry, the canonical examples are BattlEye, Easy Anti-Cheat, and Riot Vanguard — kernel-mode drivers that watch for memory tampering, hooked syscalls, and unauthorized overlays. The same engineering surface has been adopted by interview-proctoring platforms like CoderPad Enterprise and CodeSignal, except inverted: instead of catching cheaters in a multiplayer game, they catch candidates running AI assistance during a hiring round. Anti-cheat at proctoring depth typically combines client-side process enumeration, OS-level window listing, and screen-capture diff analysis. The architectural arms race is identical to the gaming case — anti-cheat operates one ring deeper than the cheat. A user-mode AI overlay is detectable by a kernel-mode anti-cheat module; a kernel-mode AI overlay is not, because the proctor doesn't ship a kernel driver to the candidate's machine.

See also: How CoderPad Enterprise anti-cheat detects AI tools, The four stealth layers

B

BattlEye

A commercial kernel-mode anti-cheat product, originally shipped with PUBG, ARMA, Rainbow Six Siege, and other AAA multiplayer games. Architecturally, BattlEye loads a signed Windows kernel driver alongside the game client. The driver hooks key syscalls, walks process and module lists from kernel structures (rather than user-mode APIs that can be spoofed), and validates code integrity against known-good hashes. Cheats that evade BattlEye typically operate at the same kernel depth — DMA cheats, virtual-machine introspection rigs, or competing kernel drivers. BattlEye is relevant to interview stealth because it represents the engineering ceiling for client-side detection: any AI tool that wants to be undetectable on a future proctoring stack with comparable depth needs to operate where BattlEye operates. FaangCoder's four-layer kernel stack mirrors the same engineering surface, applied to evading detection rather than performing it.

See also: The four stealth layers

C

Chrome content script

JavaScript injected into a web page by a Chrome extension or a hosting platform, executing in the page's DOM context. Content scripts can read and modify the page, register event listeners on keydown, paste, focus, blur, and visibilitychange, and watch the editor's text mutations through MutationObserver. Coding-interview platforms ship content scripts that act as their primary detection surface — they run inside the candidate's tab and have full DOM-level visibility into typing patterns, paste events, hotkey presses, and tab-switching. CoderPad Enterprise's content script is the canonical example. The script's blind spot is anything outside the browser process: a native desktop overlay, a kernel-mode hotkey hook, or a process running outside the tab is invisible to the content script. That gap is the reason native overlays beat browser-extension-based AI tools on every modern proctoring stack.

See also: How CoderPad Enterprise anti-cheat detects AI tools
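
The paste-asymmetry check a content script can run reduces to a few lines. This is a hedged Python stand-in for logic that actually runs in the page's JavaScript; the ratio threshold and the event model are illustrative.

```python
# Sketch: compare characters that landed in the editor (seen via a
# MutationObserver-style diff) against the keydown events that preceded
# them. One keydown normally yields at most one character; a large block
# arriving with few keydowns indicates a paste or injected text.
def paste_burst(inserted_chars: int, recent_keydowns: int,
                ratio_limit: float = 3.0) -> bool:
    return inserted_chars > recent_keydowns * ratio_limit

paste_burst(450, 2)   # 450 chars, 2 keydowns -> True (flag)
paste_burst(38, 40)   # normal typing -> False
```

Tools that paste verbatim trip the ratio immediately; output paced into the editor at human speed does not.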

CoderPad / CoderPad Enterprise

A live-coding interview platform widely used by FAANG-tier and tier-1 tech companies. CoderPad ships three surfaces: a free Sandbox (unproctored), a Consumer Interview product (live recruiter session, basic anti-cheat), and CoderPad Enterprise — the proctored variant with content-script tracking, screen-share fingerprinting, and process enumeration via a signed companion app. Enterprise is where every "I got caught using X" thread on TeamBlind and r/cscareerquestions eventually points; it's the round candidates pre-flight against. The Enterprise detection stack typically composes 5+ vectors: keydown event tracking, window-focus deltas, hotkey collision checks, screen-share fingerprinting (including DXGI Output Duplication hooks), and WMI process probes. A user-mode AI overlay with only the WDA flag fails on Enterprise; a kernel-resident overlay does not.

See also: How CoderPad Enterprise anti-cheat detects AI tools, Does CoderPad detect AI usage?

CodeSignal

A coding assessment platform used by FAANG-tier companies for screening rounds (the General Coding Framework / GCF), certified evaluations, and the proprietary CodeSignal IQ keystroke-biometrics system. CodeSignal Certified Evaluations are timed, recorded, and run with anti-cheat screening that includes microphone monitoring, keystroke biometrics, and tab-switch counting. The IQ system attempts to flag AI-generated code by analyzing typing cadence — humans type in irregular bursts; language models produce uniform output if pasted directly. Tools that calibrate typing pace to the candidate's real speed defeat this layer; tools that paste verbatim do not. CodeSignal's microphone monitoring also catches notification sounds during certified evaluations, which is why Focus Assist and notification-sound suppression are part of the Windows-native stealth setup.

See also: Does CodeSignal detect AI?, Best Windows stealth setup

Core Web Vitals (LCP, CLS, INP)

Google's three primary user-experience metrics, factored into search ranking. LCP (Largest Contentful Paint) measures how quickly the largest above-the-fold element renders; the threshold for "good" is under 2.5 seconds. CLS (Cumulative Layout Shift) measures unexpected layout movement during page load; "good" is under 0.1. INP (Interaction to Next Paint) replaced FID in 2024 and measures the latency of user interactions; "good" is under 200ms. The three together approximate "does this page feel fast and stable to a real user." Pages that fail Core Web Vitals get demoted in mobile search results and lose conversion on bounce-sensitive funnels. The PageSpeed Insights API and Lighthouse both report all three; the field-data versions come from the Chrome User Experience Report (CrUX), which is what Google's ranking signal actually consumes.
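
The published thresholds can be encoded in a small classifier. A sketch using the documented good/poor cutoffs; values in between are "needs improvement":

```python
# Core Web Vitals thresholds as published by Google: (good-at-or-below,
# poor-above). LCP in seconds, CLS unitless, INP in milliseconds.
THRESHOLDS = {
    "lcp": (2.5, 4.0),
    "cls": (0.1, 0.25),
    "inp": (200, 500),
}

def classify(metric: str, value: float) -> str:
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

classify("lcp", 2.1)   # -> "good"
classify("inp", 600)   # -> "poor"
```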

D

DOM (Document Object Model)

The browser's in-memory tree representation of an HTML document. Every element on a web page is a node in the DOM; JavaScript reads and writes the page by mutating that tree through APIs like document.querySelector, element.appendChild, and MutationObserver. The DOM is the layer where browser content scripts operate — they have full read access to the page's nodes, attributes, and event streams. A browser-extension AI tool lives in the DOM; a native desktop overlay does not. The architectural distinction matters for proctoring: the proctor's content script can enumerate every iframe, every form element, and every event listener on the page, but it cannot see processes running outside the browser tab. That visibility ceiling — the DOM boundary — is the structural reason native overlays beat browser-extension-based AI tools on detection.

DXGI (DirectX Graphics Infrastructure)

Microsoft's low-level graphics API layer that sits between Direct3D applications and the kernel-mode display driver. Relevant to interview stealth because the DXGI Output Duplication API (IDXGIOutputDuplication::AcquireNextFrame) is the standard way modern screen-capture tools — including the proctor companion apps that hook screen-share — read the desktop framebuffer at near-zero CPU cost. DXGI Output Duplication captures the composited frame after DWM has assembled it, which means user-mode tools with the WDA_EXCLUDEFROMCAPTURE display-affinity flag rely on DWM honoring that flag before compositing. A proctor companion that hooks the duplication path can compare the GPU's actual output against what the candidate's screen-share sends. A window present in the GPU surface but missing from the broadcast frame becomes a fingerprint. Kernel-resident overlays sit before DWM in the pipeline, so they're not in the GPU surface for DXGI to duplicate.

See also: Ring-0 memory read vs screenshot OCR
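
The duplication-path comparison reduces to a set difference. A minimal sketch in which the sets stand in for real surface analysis — no actual DXGI calls:

```python
# Window IDs the GPU composited vs window IDs recoverable from the
# broadcast frame. Anything in the first set but not the second is the
# signature a WDA-excluded user-mode overlay leaves behind.
def capture_discrepancy(gpu_surface: set, broadcast: set) -> set:
    return gpu_surface - broadcast

# User-mode WDA overlay: composited by DWM, excluded from the broadcast.
capture_discrepancy({"ide", "browser", "overlay"}, {"ide", "browser"})
# -> {"overlay"}  (positive fingerprint)

# Kernel-resident overlay: never enters the GPU surface at all.
capture_discrepancy({"ide", "browser"}, {"ide", "browser"})
# -> set()  (nothing to flag)
```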

E

Easy Anti-Cheat (EAC)

A commercial kernel-mode anti-cheat product owned by Epic Games, shipped with Fortnite, Apex Legends, Dead by Daylight, and Elden Ring's online mode. EAC's architecture mirrors BattlEye's: a signed Windows kernel driver, syscall hooks, integrity checks against known-good module hashes, and process-list walks from kernel data structures. EAC is the more permissive of the two for VM and Linux compatibility, which has implications for the Linux candidate population. EAC is relevant to interview stealth because it represents the engineering ceiling for what a future proctoring stack could ship. A coding-platform proctor that licenses EAC-equivalent technology and bundles it as a candidate-side companion would be running detection at kernel depth — the same depth our four-layer stack operates at. Today no major proctoring platform ships kernel drivers to candidates; EAC remains the relevant comparison point for the architectural ceiling, not a current threat.

F

FAANG

An acronym originally for Facebook, Apple, Amazon, Netflix, Google — the highest-paying public-tech-company tier in software hiring. Informally broadened over time to cover Microsoft and Meta (Facebook's renamed parent), and increasingly Stripe, Databricks, OpenAI, and Anthropic when discussing comp bands. The relevance to interview prep is that FAANG-tier companies share a similar interview shape: 4–7 rounds, heavy on coding (LeetCode-medium and -hard), behavioral via STAR-format storytelling, and system design at L5+. FAANG total compensation at L4 (entry-engineer, post-college) sits in the $180–250K band; L5 (senior) is $300–500K; L6 (staff) is $500K–1M+. The signing bonus alone at L4 typically exceeds $30K, which is why interview-prep tooling is a positive expected-value bet relative to its sticker price.

Fair Screen

A free, open-source candidate-side scanner launched on Hacker News in 2026 that explicitly targets AI interview overlays via OS-level window enumeration. Fair Screen calls Win32 APIs like EnumWindows to identify windows that are non-shareable, click-through, or marked with capture-exclusion flags — exactly the user-mode WDA-based stealth approach every consumer AI interview tool uses. It is rapidly becoming the canonical adversary scanner for the candidate-side discourse: any "is X detectable" Reddit or HN thread now references Fair Screen. Defeating Fair Screen requires moving below the user-mode window-enumeration table — the same kernel-level layer FaangCoder operates at, where the overlay's window record is filtered out before any user-mode enumeration call sees it. Tools that rely solely on the WDA flag fail Fair Screen's scan; tools that operate below window enumeration do not.

See also: The four stealth layers
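
The scan logic described above reduces to walking an enumerated window list and checking stealth-associated attributes. A Python simulation: the dicts stand in for EnumWindows plus per-window attribute queries, and the flag names are illustrative.

```python
# Attributes a Fair Screen-style scanner treats as suspect.
SUSPECT_FLAGS = {"exclude_from_capture", "click_through"}

def scan(windows: list) -> list:
    # Return titles of windows carrying any suspect attribute.
    return [w["title"] for w in windows
            if SUSPECT_FLAGS & set(w.get("flags", []))]

windows = [
    {"title": "CoderPad - Chrome", "flags": []},
    {"title": "Helper", "flags": ["exclude_from_capture", "click_through"]},
]
scan(windows)   # -> ["Helper"]
```

A window filtered out of the enumeration result before it reaches user mode never enters the loop at all — the kernel-layer defense the entry describes.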

Fingerprinting

Identifying a user, device, or piece of software by combining multiple signals into a unique signature. Browser fingerprinting reads the user-agent string, installed fonts, GPU driver string, timezone, language, screen dimensions, and dozens of other low-information signals; combined, they produce a hash that's almost-unique per device. Process fingerprinting reads loaded modules, signing chains, parent-PID, and command-line arguments; an AI tool's helper process leaves a fingerprint even if its name is generic. Detection-query fingerprinting watches the answers a process gives to specific OS queries — if a process queries WDA_EXCLUDEFROMCAPTURE on every visible window every 100ms, that polling pattern is itself a fingerprint, regardless of the answer. Stealth in 2026 means consistent answers across query transports plus blending into the noise floor of normal-process behavior.
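
The combining step is just canonicalize-and-hash. A minimal sketch with illustrative signal names:

```python
import hashlib

def fingerprint(signals: dict) -> str:
    # Sort keys so the same device always produces the same hash,
    # regardless of signal collection order.
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "timezone": "America/Los_Angeles",
    "gpu": "ANGLE (NVIDIA ...)",
    "screen": "2560x1440",
})
```

Each signal is low-information on its own; changing any one of them changes the combined hash.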

H

HackerRank

A coding-interview and skill-assessment platform used widely for FAANG and tier-1 phone screens and online assessments. HackerRank ships full-screen mode for proctored rounds, tab-switch detection (counts how many times the candidate left the tab), and a content script that watches keystroke patterns, copy-paste events, and DOM mutations inside the editor. HackerRank's detection vector for AI tools is principally the content-script layer — same as CoderPad's, but with a slightly different rule set tuned to HackerRank's editor. Tools that take focus away from the IDE or paste large blocks without matching keydown events get flagged in the recruiter-side report. HackerRank also captures focus-blur events at the window level, which means an AI overlay that grabs focus when invoked leaves a fingerprint in the timing log.

See also: Does HackerRank detect AI?

Hotkey

A keyboard shortcut that triggers an application action, typically a chord involving modifier keys (Alt, Ctrl, Cmd, Shift) plus a regular key. AI interview tools use hotkeys to invoke Solve, Debug, Optimize, and similar workflows without requiring mouse clicks that would steal focus from the IDE. The detection-side wrinkle is that browser-side content scripts can register a global document.addEventListener('keydown', ..., {capture: true}) and see every chord pressed inside the candidate's tab. If the chord doesn't produce an editor mutation — i.e., Alt+Enter pressed but no character inserted — the asymmetry is itself a fingerprint. The defense is to intercept the keystroke at kernel depth, before the browser's keydown handler fires. FaangCoder's keyboard hook lives in the Windows kernel, which means the page's content script never observes the hotkey at all.
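
The asymmetry check described above reduces to pairing chords with subsequent editor mutations. A Python stand-in for the content script's event stream; the tuple format is illustrative.

```python
def chord_without_mutation(events: list) -> bool:
    """True if the last modifier chord produced no visible editor change."""
    pending = False
    for kind, detail in events:
        if kind == "keydown" and "+" in detail:   # e.g. "Alt+Enter"
            pending = True
        elif kind == "mutation":                   # editor text changed
            pending = False
    return pending

chord_without_mutation([("keydown", "Alt+Enter")])                        # True
chord_without_mutation([("keydown", "Alt+Enter"), ("mutation", "ins")])   # False
chord_without_mutation([])   # kernel-hooked hotkey: chord never observed -> False
```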

I

IQ Score (CodeSignal)

CodeSignal's proprietary scoring system for the General Coding Framework, normalized to a 600–850 scale. Beyond the raw score, CodeSignal IQ ships keystroke-biometrics analysis on certified evaluations: it measures typing cadence, inter-key timing, and bursts vs sustained sequences, then compares against known patterns of human typing and known signatures of pasted LLM output. Tools that paste verbatim model output trigger the biometric layer; tools that calibrate to the candidate's real typing speed and feed output through a "type as me" pacing module do not. The IQ score is also factored into screening decisions at FAANG-tier companies — recruiters set thresholds (e.g., 700+ for Meta L4) below which candidates don't advance. The biometric layer is independent of the score itself but contributes to a fairness-review flag that recruiters see alongside the result.

K

Kernel mode (ring-0)

The most privileged execution context on x86 and x64 processors, where code can directly access hardware, modify page tables, and call any system service without restriction. Contrast with user mode (ring-3), where applications run with limited access and have to make system calls to request kernel services. The Windows kernel, device drivers, and the display compositor all run in kernel mode; ordinary applications, including web browsers and most AI interview tools, run in user mode. The privilege gap matters for stealth because anti-cheat detection — and proctoring detection that copies the same architecture — operates more effectively at kernel depth. A user-mode overlay can hide things from user-mode queries; only a kernel-mode component can hide things from kernel-mode queries. Because proctoring stacks don't ship signed kernel drivers to candidate machines, an AI overlay that lives in the kernel structurally outranks the proctor's detection ceiling.

See also: Ring-0 memory read vs screenshot OCR, The four stealth layers

Kernel-mode driver

A Windows software component that runs in kernel mode (ring-0), typically packaged as a .sys file and loaded by the Service Control Manager. Drivers are the standard way to extend the OS — hardware vendors ship them for GPUs, network cards, and storage controllers, and software vendors ship them for anti-cheat, EDR products, and virtualization. Drivers must be Authenticode-signed by a Microsoft-recognized certificate authority on Windows 11 with Secure Boot and HVCI enabled, which is a multi-quarter signing-pipeline cost that filters out casual entrants. Once loaded, a driver has the same privileges as the kernel itself: it can hook syscalls, walk EPROCESS and tagWND lists from kernel data structures, intercept the display pipeline before DWM composites, and filter the data user-mode enumerators see. FaangCoder's Pro-tier stealth lives at this layer.

See also: The four stealth layers

Keystroke biometrics

The measurement and analysis of typing patterns — inter-key timing, dwell time, key-release-to-next-press intervals, burst vs sustained cadence — to identify or authenticate users, or to flag non-human typing patterns. CodeSignal IQ ships this on certified evaluations; some Karat-style mock platforms run lightweight versions. The detection signal that matters for AI interview tools is the contrast between human bursts and LLM-uniform output. Humans type in rapid bursts followed by pauses for thinking; pasted code from a language model has zero pauses and consistent inter-character timing. Tools that paste verbatim trigger the biometric layer immediately. Tools that ship a pacing module — output rendered into the editor at a calibrated pace, with realistic micro-edits and pauses — defeat the layer because the typing pattern matches the candidate's calibrated baseline.
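
The core cadence signal can be sketched as a coefficient-of-variation test over inter-key intervals. The threshold is illustrative; real systems model far richer features (dwell time, digraph timing, burst structure).

```python
import statistics

def looks_pasted(inter_key_ms: list, cv_threshold: float = 0.25) -> bool:
    # Humans produce high-variance timing (bursts plus thinking pauses);
    # verbatim-pasted model output is near-uniform.
    if len(inter_key_ms) < 2:
        return False
    cv = statistics.stdev(inter_key_ms) / statistics.mean(inter_key_ms)
    return cv < cv_threshold   # too uniform -> flag

human  = [80, 40, 300, 65, 1200, 55, 90]   # bursts and pauses
pasted = [12, 11, 12, 13, 12, 11, 12]      # machine-uniform
looks_pasted(human)    # -> False
looks_pasted(pasted)   # -> True
```

A pacing module defeats this test by reproducing the candidate's own interval distribution rather than the model's.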

L

LeetCode

A practice platform with 3,000+ algorithm problems used as the canonical interview-prep surface for FAANG-tier coding rounds. Problems are tagged by pattern (sliding window, two-pointer, DP, graph traversal) and difficulty (easy, medium, hard). The Blind 75 and NeetCode 150 are curated subsets that cover the highest-frequency patterns in FAANG interviews. LeetCode itself is unproctored — no anti-cheat, no AI detection — which makes it the safe environment for practicing AI-assisted prep workflows. The risk surface for AI interview tools is the live coding round on HackerRank, CoderPad, CodeSignal, or Karat; LeetCode practice doesn't trigger any of those. The platform also runs live contests and a paid Premium tier that surfaces real recent FAANG questions.

See also: LeetCode patterns guide, Blind 75 vs NeetCode 150

M

Memory read

Direct extraction of data from another process's address space without going through the visible UI surface. Operating systems expose memory-read APIs in user mode (ReadProcessMemory with appropriate handle privileges) and in kernel mode (direct EPROCESS structure access). Memory read is the input-side architectural alternative to the screenshot-and-OCR pipeline most consumer AI interview tools ship with: instead of capturing pixels off the screen and feeding them through optical character recognition, the tool reads the IDE or browser's source of truth — the actual problem statement, the actual code buffer, the actual test output — directly from process memory. The tradeoff is privilege: kernel-mode memory read has fewer restrictions and lower latency than user-mode, and it sees the data before any rendering pipeline could lose it to font fallback, anti-aliasing, or scroll truncation. FaangCoder's Solve hotkey fires a kernel-mode memory read against the IDE/browser process for every Solve, Debug, and Optimize follow-up.

See also: Ring-0 memory read vs screenshot OCR
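
A same-process Python illustration of the idea — read bytes straight out of an address, no pixels, no OCR. (A real cross-process read needs ReadProcessMemory with a handle, or kernel access; this sketch only reads the current process's own memory.)

```python
import ctypes

# Put known text in a buffer and note its address.
buf = ctypes.create_string_buffer(b"def solve(nums): ...")
addr = ctypes.addressof(buf)

# Read the raw bytes back directly from memory -- the source of truth,
# immune to font rendering, scrolling, or OCR misreads.
recovered = ctypes.string_at(addr, len(buf.value))
assert recovered == b"def solve(nums): ..."
```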

Mid-round failure

The reliability failure mode where an AI interview tool latency-spikes, hallucinates, crashes, or times out partway through a live coding round. Distinct from stealth failure (getting flagged by the proctor) — mid-round failure is reliability under load, where the tool worked fine in practice but breaks when the candidate has 22 minutes left and three more questions to answer. Documented in Reddit and Trustpilot complaints about Cluely's 5–10 second response latency, Final Round AI's mid-interview Copilot crashes, and Interview Coder's helper-process focus drops. Mid-round failure is functionally as bad as detection because the candidate loses the round either way. The prevention is the architectural reliability of the tool — kernel-resident, no network round-trip on every keystroke, no screenshot pipeline that can stall on a heavy screen update.

MOSS (plagiarism detection)

Stanford's Measure of Software Similarity, a service that detects code similarity across submissions. Used historically in CS coursework to catch students copying assignments; relevant to interview stealth because the same family of similarity-detection algorithms is now applied server-side by some proctoring platforms. CodeSignal's plagiarism check compares submitted solutions against a corpus of known LLM outputs, public LeetCode solutions, and prior submissions in the same evaluation cohort. A solution that matches the GPT-4 Turbo or Claude 4.7 typical-output signature triggers a fairness-review flag, even if no other detection vector fired. Defenses are tool-side: AI tools that paraphrase output through a candidate's prompt template rather than emitting verbatim model text break the corpus match. Tools that paste verbatim do not.
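
The similarity family MOSS belongs to can be sketched as k-gram hashing plus Jaccard overlap. Real MOSS adds token normalization and winnowing; this is the bare idea.

```python
def kgram_hashes(code: str, k: int = 5) -> set:
    # Normalize away whitespace and case, then hash every k-character window.
    text = "".join(code.split()).lower()
    return {hash(text[i:i + k]) for i in range(len(text) - k + 1)}

def similarity(a: str, b: str, k: int = 5) -> float:
    ha, hb = kgram_hashes(a, k), kgram_hashes(b, k)
    if not ha or not hb:
        return 0.0
    return len(ha & hb) / len(ha | hb)   # Jaccard index

similarity("for i in range(n): s += a[i]",
           "for i in range(n):\n    s += a[i]")   # -> 1.0 (reformatting doesn't help)
```

Verbatim model output scores near 1.0 against the corpus entry it matches; paraphrased output breaks the k-gram overlap.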

O

OCR (optical character recognition)

The pipeline that converts pixel images of text into machine-readable strings. Modern OCR engines — Tesseract, Google Cloud Vision, Microsoft Azure Computer Vision — work well on clean, high-resolution screen captures but degrade on small fonts, dark themes, anti-aliased text, or scrolled regions where the relevant text is partially clipped. Most consumer AI interview tools route their input through OCR: capture a screenshot, send the image to an OCR service, feed the recognized text to a language model, then render the answer back to the candidate. The architectural cost is latency (3–8 seconds typical end-to-end) and lossiness (problem-statement misreads, code-comment scrambling, missed test outputs). Memory-read tools bypass the entire pipeline by reading the source bytes directly.

See also: Ring-0 memory read vs screenshot OCR

Overlay (graphics)

A graphical surface rendered on top of one or more application windows. Game UI overlays (Discord's in-game overlay, Steam's, Nvidia's GeForce Experience) are the consumer-familiar example; AI interview tools use the same architectural pattern to surface answers on top of the candidate's IDE without alt-tabbing. Overlay implementations vary by depth: a borderless top-most window with transparent background is the user-mode approach; a kernel-mode display-pipeline insertion is the anti-cheat-grade approach. The user-mode version is visible to any process that walks the window list or hooks DWM; the kernel-mode version is filtered out of both. The visible-vs-invisible-to-screen-share distinction is handled separately via the WDA flag in user-mode tools, or via earlier display-pipeline filtering in kernel-mode tools.

P

Process enumeration

The OS operation of listing every running process. Windows exposes three primary user-mode entry points: CreateToolhelp32Snapshot + Process32First/Next, EnumProcesses from psapi, and NtQuerySystemInformation with the SystemProcessInformation info class. WMI's Win32_Process query and Task Manager use the same underlying kernel data — a walk of the EPROCESS doubly-linked list maintained by the executive. Proctoring companion apps with sufficient privilege query the process list and match against a denylist of known AI-tool process names. Tools whose helper executable is named obviously (e.g., InterviewCoder.exe) light up immediately. Defeating process enumeration requires kernel-level filtering — the process exists from the scheduler's perspective but the snapshot returned to user-mode callers omits its row. This is the same construct rootkits use, applied to candidate-side stealth.

See also: The four stealth layers
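
The filtering construct can be simulated in a few lines: the authoritative list contains every process, but the snapshot returned to user-mode callers omits the hidden row. Names and PIDs are illustrative.

```python
# Kernel-side truth: every scheduled process, including the hidden one.
KERNEL_PROCESS_LIST = [(4, "System"), (812, "chrome.exe"),
                       (2044, "overlay_helper.exe")]
HIDDEN_PIDS = {2044}

def snapshot_for_user_mode() -> list:
    # What a CreateToolhelp32Snapshot-style caller receives: the same
    # list, minus the filtered rows.
    return [p for p in KERNEL_PROCESS_LIST if p[0] not in HIDDEN_PIDS]

def denylist_scan(denylist: set) -> list:
    # The proctor's check runs against the curated snapshot.
    return [name for _, name in snapshot_for_user_mode() if name in denylist]

denylist_scan({"overlay_helper.exe", "InterviewCoder.exe"})   # -> []
```

The process still runs from the scheduler's perspective; only the answer handed back to user mode is curated.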

Proctor / proctoring

A human or automated system that monitors a candidate during an assessment for cheating, environmental violations, or integrity issues. Live proctoring uses a human reviewer watching webcam, microphone, and screen-share feeds in real time; automated proctoring runs algorithms against the same feeds plus anti-cheat telemetry (process lists, window lists, focus events, keystroke patterns) and flags suspicious patterns for human review afterward. Modern coding-interview platforms typically run a hybrid: automated pre-screen catches the obvious cases, human review handles the borderline ones. The proctor's primary detection vectors in 2026 are screen-share fingerprinting, window enumeration (Fair Screen-style), keystroke biometrics, and the candidate's verbal explanation of their code. The last vector is the one no AI tool defeats — see the explain-back blog post for why.

See also: Defending an AI-assisted answer

R

Ring-0 vs Ring-3

x86 and x64 processors define four privilege rings (0–3); modern operating systems use only two: ring-0 (kernel mode) and ring-3 (user mode). Ring-0 code can directly access hardware, manipulate page tables, and call any system service without going through a syscall trap. Ring-3 code runs with restricted access and has to issue syscalls to request kernel services. The boundary between the two is enforced by the CPU itself — ring-3 code attempting privileged instructions traps to ring-0. The architectural implication for stealth: detection at ring-0 outranks stealth at ring-3, and stealth at ring-0 outranks detection at ring-3. Because proctoring stacks operate from ring-3 (they don't ship signed kernel drivers to candidate machines), an AI tool with ring-0 residency structurally outranks them.

See also: Ring-0 memory read vs screenshot OCR

S

Screen capture

The OS operation of producing a pixel-perfect bitmap of the desktop or a window. Windows exposes three primary APIs: BitBlt from GDI (legacy), IDXGIOutputDuplication::AcquireNextFrame from DXGI (modern, near-zero CPU cost), and the Windows.Graphics.Capture WinRT API (newer, more permission-gated). All three read a frame that DWM has already composited. The WDA_EXCLUDEFROMCAPTURE display-affinity flag asks DWM to omit a specific window from the composite, which user-mode AI tools rely on. The flag is queryable, which is why proctors with companion-app access can fingerprint tools that set it. The architectural alternative is to filter the overlay's surface from the display pipeline before DWM composites — kernel-mode work that user-mode tools can't reach.

Screen-share

The video-conferencing feature that broadcasts a candidate's screen to one or more remote viewers. Zoom, Google Meet, Microsoft Teams, and Amazon Chime all ship variants. The candidate-side configuration matters for interview stealth: sharing the entire desktop reveals everything visible, including AI overlays, notification banners, and other monitors. Sharing a specific application window restricts the broadcast to that window only, which is the safer default. The proctor's screen-share detection layer reads the broadcast frames and runs computer-vision matching against known AI-tool overlays (overlay chrome, fonts, layouts). User-mode WDA-flagged overlays are usually omitted from the captured frame but still leave queryable traces; kernel-resident overlays never enter the captured frame at any layer of the pipeline.

Stealth / detection-evasion

The architectural property of being invisible to a specific class of detection queries. In the interview-stealth context, stealth means: the overlay is invisible to screen capture, the process is invisible to enumeration, the window is invisible to enumeration, and the hotkey is invisible to the browser's keydown listener. Stealth is layered, not binary — a tool can be stealth-against-the-WDA-flag-query but visible to window-enumeration, or vice versa. A "complete stealth" tool answers every detection query with a story that holds together across transports. The competitive landscape sorts by depth: single-flag (user-mode WDA only) is the lowest tier; deep user-mode (process hiding via API hooks) is mid-tier; kernel-mode with display-pipeline strip and filtered enumeration tables is the highest commercial tier today.

See also: The four stealth layers, Tools that get you caught vs tools that don't

Study Mode

FaangCoder's practice surface — a clickable system-design and algorithm graph where each node expands into the canonical defense (invariant, complexity argument, failure mode) for that pattern. The point is to make the AI's answer the candidate's answer before the interviewer asks "explain your code." Click the "two-pointer" node to see the loop invariant; click the "O(n) amortized" edge to read why each element enters and exits the window at most once. Designed to be used in a 60–90 second window after Solve fires, before the candidate starts typing in earnest. The roadmap includes a Rehearse mode that produces a 60-second talking-script tied to the candidate's actual code in process memory — see the spec at .agents/study-mode-spec.md.

See also: Defending an AI-assisted answer

System call (syscall)

The mechanism by which user-mode code requests a kernel-mode service. On x64 Windows, syscalls go through the syscall instruction with a syscall number in rax and arguments in registers; the CPU traps to the kernel's syscall dispatcher, which validates arguments and routes to the appropriate kernel function. Every interesting OS operation — file I/O, process creation, memory mapping, window enumeration — is ultimately a syscall. Anti-cheat detection often hooks the syscall layer: when a process on the candidate's machine issues NtQuerySystemInformation (directly or via WMI), the anti-cheat driver inspects the call before letting it return, applies its own filtering, and returns a curated answer. The same construct, applied to interview stealth, lets a kernel-resident overlay filter the data the proctor's queries see — the syscall layer is where the truth gets generated.
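
The hook-and-curate pattern can be sketched as a wrapper around a query routine: let the real call run, filter the result, return the curated answer. The query function here is a stand-in for a real syscall handler.

```python
def raw_query() -> list:
    # Stand-in for the unfiltered kernel answer.
    return ["System", "chrome.exe", "overlay_helper.exe"]

def hook(syscall, filtered: set):
    def curated(*args, **kwargs):
        result = syscall(*args, **kwargs)
        # Same shape and ordering as the real answer, minus filtered rows.
        return [row for row in result if row not in filtered]
    return curated

query = hook(raw_query, {"overlay_helper.exe"})
query()   # -> ["System", "chrome.exe"]
```

Callers can't distinguish the curated answer from the truth, because the hook sits at the layer where the answer is produced.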

T

Tab-switch detection

A browser-side proctoring technique that counts how many times the candidate's tab loses focus during the assessment. Implemented via the visibilitychange event and the document.hasFocus() API; both are public DOM APIs. HackerRank surfaces tab-switch counts in the recruiter-side report; some Karat-style mock platforms use them as a hard threshold (3+ switches = automatic disqualification). The detection vector catches candidates who alt-tab to a separate AI tool window, switch to ChatGPT in another tab, or open a Discord client mid-round. AI tools that live as native overlays outside the browser don't trigger tab-switch detection because they don't take focus from the tab; AI tools that live as browser extensions or separate browser windows do.
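
The counting logic is trivial once the event stream exists. A Python stand-in for the DOM's visibilitychange stream, with an illustrative disqualification threshold:

```python
def tab_switch_verdict(events: list, limit: int = 3):
    # Count transitions to "hidden" -- each one is the candidate leaving
    # the assessment tab -- and apply a hard threshold.
    switches = events.count("hidden")
    return switches, switches >= limit

tab_switch_verdict(["hidden", "visible"])       # -> (1, False)
tab_switch_verdict(["hidden", "visible"] * 3)   # -> (3, True)
```

A native overlay never appears in this stream because it never takes focus from the tab.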

W

WDA (Windows Display Affinity)

A Windows API that controls whether a window appears in screen-capture output. The WDA_EXCLUDEFROMCAPTURE flag, set via SetWindowDisplayAffinity, asks DWM to omit the window from the composite that screen-capture APIs read. It works against simple screen recorders. It does not work against modern proctoring stacks, for two reasons. First, the flag is queryable in one syscall via GetWindowDisplayAffinity, so any proctor companion app that walks the visible window list and asks each window's affinity sees the flag set and now has a positive fingerprint. Second, the flag operates at the same layer where the proctor is asking — DWM-level — so a proctor companion that hooks the DXGI Output Duplication path and compares the GPU surface to the broadcast frame can spot the discrepancy. Single-flag stealth fails on every Enterprise-tier proctoring platform.

See also: The four stealth layers

WDDM (Windows Display Driver Model)

The architecture by which Windows talks to GPUs and the display pipeline. WDDM splits the display driver into a user-mode component (the UMD, which Direct3D applications interact with) and a kernel-mode component (the KMD, which arbitrates GPU access across processes). DWM, the desktop window manager, sits above WDDM and composites every visible window into a single frame for output. The architectural relevance for stealth: anything that wants to be invisible to screen capture has to operate below DWM, which means below WDDM's user-mode component or inside the kernel-mode driver itself. Microsoft's WDDM design documentation describes the boundary; commercial graphics products and commercial anti-cheat both navigate it. Most consumer AI interview tools do not — they ship at user-mode-WDA depth and rely on DWM honoring the flag, which Enterprise proctoring routinely defeats.

WHQL (Windows Hardware Quality Labs)

Microsoft's signing pipeline for Windows kernel drivers. WHQL submission requires the driver to pass a battery of automated tests — driver-verifier compliance, install-and-uninstall stability, behavior under boot-time stress, and compatibility across hardware classes. A driver that ships to consumer Windows machines with Secure Boot and HVCI enabled needs WHQL-class signing or a Microsoft-issued cross-signed certificate. The pipeline is multi-quarter, expensive, and unforgiving — drivers that misbehave under verifier or that brick a customer machine post-install fail the pipeline and have to be re-submitted. The cost is the moat. Most consumer AI interview tools cannot ship a kernel driver because they cannot afford the WHQL signing cost or the engineering cost to survive every Patch Tuesday without bricking the install base. FaangCoder ships through this pipeline.

Window enumeration

The OS operation of listing every top-level window across all desktops. Windows exposes EnumWindows, EnumChildWindows, FindWindowEx, and GetTopWindow from the user32 library; all of them resolve through the win32k.sys subsystem to walk the kernel's tagWND list. Fair Screen and proctor companion apps both enumerate windows to look for AI overlays — windows marked with capture-exclusion flags, click-through flags, or known title or class names. The defense at the architectural layer is to filter the enumeration result before it crosses back to user mode: the overlay's tagWND record is present from the windowing subsystem's perspective, but the buffer returned to EnumWindows does not include the handle. From the proctor's perspective, no such window exists; from the OS's perspective, the window exists and can receive messages.

See also: The four stealth layers

Windows kernel

The core component of the Windows operating system, comprising the executive (process management, memory management, I/O), the kernel itself (scheduling, interrupts, synchronization primitives), and the win32k subsystem (windowing, GDI). Runs at ring-0 with full hardware privilege. The Windows kernel is what FaangCoder's Pro-tier driver lives inside — same execution context as the rest of the kernel, same data structures (EPROCESS for processes, tagWND for windows), same syscall surface. Shipping a Windows-kernel-level overlay is multi-year work: the kernel's internal data structures shift across patches, the signing pipeline is unforgiving, and any kernel bug is a bluescreen for customers. The reward is the only execution context where stealth-against-modern-proctoring is structurally robust.

See also: The four stealth layers

Term missing? The Discord is at discord.gg/rApY63vyNZ — we'll add it.