Security

Security Model

What we defend against. What we don't. What we collect. What we don't. And the unintuitive case for putting a kernel driver between you and the proctor stack instead of a browser extension. Read this if you are paranoid; it is written for you.

Last updated: 2026-05-05.

§0 — TL;DR

Is FaangCoder Safe?

Yes — within the surface we defend, and honestly outside it. Inside our surface (browser content scripts, screen-share capture, process enumeration, automated proctor-vendor detection) the four-layer kernel architecture beats every documented detection vector. Outside our surface (a human watching over your shoulder, an interviewer asking you to walk through your reasoning, audio cues on the call) no software answer exists, and we say so.

On data: we do not store your interview problem, your code, or your test output. The model call is stateless on our side. We hold a hashed hardware ID for your license, your checkout email, and Stripe billing records — nothing else that a proctor, employer, or platform could ever request.

On the kernel driver: it is signed with our EV certificate and goes through Microsoft's WHQL signing pipeline — the same trust chain Windows uses for every kernel driver on your machine. The cert chain and the SHA256 of every release are published. Verify the binary independently if you want.

The full version of each answer is in §1–§8 below. If you only had ten seconds, that was it.

§1 — Threat model

What we defend against — and what we don't.

FaangCoder defends a narrow surface: the proctoring-vendor stack that sits between you and the interview platform. Inside that surface we go deep. Outside it, we are honest about the limits.

Defended
  • Browser content scripts that anti-cheat injects into the coding-platform tab to log keystrokes, focus changes, and hotkey events. Our hotkeys are captured at the kernel keyboard hook; no `keydown` event leaks into the browser tab.
  • Screen-share capture pipelines (Zoom / Google Meet / Microsoft Teams / Amazon Chime). Our overlay is stripped from the Windows display pipeline before any capture path can reach it. The screen-share viewer sees the IDE; it never sees us.
  • Windows process enumeration (Task Manager, `wmic process`, browser content scripts that scan the window list via Edge / Chrome instrumentation). Our process is hidden from both process- and window-enumeration calls.
  • Proctor-vendor automated detection — visibility-flag queries (the user-mode `WDA_EXCLUDEFROMCAPTURE` flag every other AI tool relies on), unshareable-overlay scanners (e.g., Fair Screen), and MOSS-style code-similarity fingerprints on what you submit.
NOT defended
  • A human sitting behind you watching your screen. We are software; there is no software answer to peripheral vision.
  • A smart interviewer who asks you to walk through your reasoning line by line. The tool gives you working code. Defending that code under cross-examination is on you.
  • A microphone on the call picking up voice cues, typing cadence, or background sound. Audio is the proctor surface our display-pipeline strip does not touch.
  • Webcam analysis where the candidate is required to keep their face on camera and a human reviews the recording post-hoc. We do not control what the webcam sees.
  • In-person, on-site interviews. This is a Windows desktop tool; if you are not on the laptop, we are not on the laptop.

The honest summary: we beat the parts of the proctor stack that are software. The parts that are human or audio or in-person, we do not. Anyone selling you software that beats those is selling you a story.

§2 — Data stance

What we collect, what we don't, what leaves the machine.

We do not store your interview problem, your code, or your test output. The model call is stateless on our side: when you press a hotkey, the desktop app extracts text from process memory, sends it directly to the model provider, streams the response back, and forgets the whole transaction.

What we do log: license-key activation events (hardware-ID hash, timestamp), Stripe billing events (customer ID, subscription state), and anonymized desktop-app crash reports. That is all. Server logs (IP, timestamp, error class) are kept up to 90 days per the privacy policy.

What leaves the machine, and to whom: the extracted problem text, your current code, and the terminal output go from your machine directly to a third-party model provider (today: OpenRouter, named in privacy §4). Nothing transits our servers in that path. The model provider sees the request body in flight; their data-handling is governed by their own policy. We do not log or store those request bodies, and the model provider does not see your license key, your billing info, or your checkout email.

PostHog is for our funnel. We use PostHog (cloud, hosted at us.i.posthog.com) to measure pricing-page views, comparison-page reads, checkout-start rates, and Discord-click attribution. We do not capture session replays. We do not capture form values. We do not have an analytics pixel on the desktop app. There is no fingerprinting payload that a proctor could read; the events live in our analytics tenant and never get shared with proctors, platforms, or anti-cheat vendors. PostHog is a third party in the strict sense — they hold the events for us — but they have no relationship with anyone on the other side of an interview.

For the formal legal disclosures, hardware-ID handling, GDPR/CCPA contact path, and third-party processor list, see /privacy.

§3 — Why kernel-mode

The unintuitive case for ring-0.

Most security-conscious engineers reach for the same instinct: a kernel driver from an unknown vendor sounds dangerous, and a browser extension sounds safer. The instinct is wrong for this problem. Here are the actual mechanics.

User-mode tools (every competitor) hook user-mode DLLs. Anti-cheat scanners walk those DLL load lists on every interview start. Once a tool's loader pattern is known, the scanner fingerprints it within weeks and adds a signature. The typical lifecycle for a popular user-mode AI overlay is three to six months from launch until detection becomes routine. This is documented in the public TeamBlind and Reddit threads we cite on the comparison table — Interview Coder 2.0 was undetectable at launch and broadly flagged by the major coding-platform stacks within months. UltraCode, LockedIn, and Cluely follow the same arc.

Kernel-mode lives at a different privilege level. Ring-0 code can hook the display pipeline, intercept window enumeration, and spoof visibility-flag queries from a place the user-mode anti-cheat scanner cannot reach. The scanner cannot enumerate kernel modules without itself being a kernel-mode driver, and the proctor-vendor stack does not ship one — that's a different category of trust ask, with different installation prompts, on the candidate's own machine. A kernel driver from a proctor is conceivable but not typical. (For the more aggressive anti-cheat stacks like BattlEye and Easy Anti-Cheat, the picture is symmetric: they too are kernel-mode, and the contest moves to who finds whose hooks first. That contest is the engineering work — see /blog/four-stealth-layers-kernel-windows.)

The risk transfers; it does not disappear. Kernel drivers cannot be hot-patched silently. A driver update means a signed installer, an attestation chain, and a SmartScreen prompt. We update the binary, you update the binary. That removes the silent-update class of risk that user-mode tools carry (browser extensions update behind the scenes from a vendor server) and replaces it with a visible-update class of risk. We think the visible-update class is the better trade for software that runs at this privilege level.

Trust mechanics, not adjectives. The driver is signed with our EV certificate and goes through Microsoft's WHQL signing pipeline — the same trust chain Windows requires for every kernel driver on your machine. We publish the certificate chain and the SHA256 of every release; instructions for verifying the binary independently live at /blog/windows-kernel-driver-attestation-stealth. SmartScreen warnings on first install are normal for any binary without millions of downloads; they fade as Microsoft accumulates reputation signal across our installs. That is the cost of being a small vendor that ships software that matters.

§4 — License security

One device at a time. Bound to hardware. Resettable.

Each license is bound to one device hardware fingerprint at a time. The fingerprint is a one-way hash of stable hardware identifiers; we never see the raw values. The license itself is a JWT signed by our key — the desktop app validates the signature locally, so the tool works offline once the license is activated.
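The binding-plus-offline-validation flow above can be sketched with stdlib primitives. This is an illustration only: the real token format, claims, and signing algorithm are not published here, an HMAC shared-secret check stands in for the asymmetric JWT signature, and every name below is hypothetical.

```python
import base64
import hashlib
import hmac
import json


def hardware_fingerprint(identifiers: list[str]) -> str:
    """One-way hash of stable hardware identifiers; raw values never leave the machine."""
    return hashlib.sha256("|".join(identifiers).encode()).hexdigest()


def verify_license(token: str, key: bytes, device_hash: str) -> bool:
    """Validate a signed license token entirely offline (illustrative HMAC variant).

    A real deployment would verify an asymmetric JWT signature instead, so the
    verification key shipped in the app cannot mint new licenses.
    """
    try:
        payload_b64, sig_b64 = token.rsplit(".", 1)
        expected = hmac.new(key, payload_b64.encode(), hashlib.sha256).digest()
        if not hmac.compare_digest(expected, base64.urlsafe_b64decode(sig_b64)):
            return False  # tampered payload or wrong key
        claims = json.loads(base64.urlsafe_b64decode(payload_b64))
        return claims.get("hw") == device_hash  # bound to one device at a time
    except ValueError:
        return False  # malformed token
```

Because the signature check needs no network call, the tool keeps working offline once activated; only the initial binding touches our servers.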

If you change machines, reset the binding from the dashboard's Subscription tab and re-activate on the new device. The reset is rate-limited (a small handful of resets per month) to prevent license-sharing patterns; legitimate moves are well within the limit.

PII footprint is intentionally tiny. We do not require your real name. The checkout email is the only personal identifier we hold. License keys are not tied to an interview round, a platform, an employer, or a job title. What we know about your usage is therefore: you have a license, your hardware ID activated it, and your last billing event went through. We do not know which interviews you ran the tool against, and we have no mechanism that would allow a third party to ask us.

§5 — Legal & ToS

The platforms ban third-party tools. We don't argue.

Coding-platform terms of service typically prohibit third-party tools during proctored assessments. So do most exam-software EULAs. We do not pretend otherwise. We sell a desktop tool for your private use; the ethical and legal question of where you use it is yours.

We do not promote use against ToS. We do not encourage you to violate any platform's terms. We make a tool. What you do with the tool is governed by your contract with the platform, your relationship with your interviewer, and your jurisdiction. None of those are us. For our own contractual terms, see /terms; for the no-questions-asked refund window, see /refund-policy.

We don't proactively share customer data with proctors, employers, or platforms. Ever. If we are served a lawful subpoena, we comply with the law like any other company — but the practical scope of what we can disclose is small (see §2 and the FAQ entry on subpoenas below). We do not have your interview content. We never have. The dataset that could be requested amounts to hashed hardware IDs and billing events.

§6 — If you get flagged

The honest answer is that we ship patches, not legal cover.

We run a Discord channel for "got flagged" reports. When a new detection vector emerges in the wild — a proctor stack rolls out a new fingerprint, a coding platform adds a new content-script check — the report goes there first. We patch within days where the fix is binary-side, and ship a release. Join: https://discord.gg/rApY63vyNZ.

We don't have a legal guarantee for you. Proctoring vendors typically operate behind arbitration clauses that bind you, the candidate. We cannot fight your case for you, and any vendor or company that promises a software warranty against being flagged is misrepresenting what software warranties cover.

If a proctor vendor or platform wants to submit our binary for review, we'll engage. We have not had this request to date. The engagement would be technical (handing them a build, answering questions about the architecture, reviewing their reproduction); it would not include customer data because we do not have any that is meaningful to that conversation.

§7 — Open audit

No SOC 2. Yes attestation chain.

We are a small team. We do not hold a SOC 2 certificate. Most stealth tools in this category do not, and we are not going to pretend we do.

What we do publish: the kernel-driver attestation chain — the EV certificate chain and the SHA256 of every release binary. If you want to verify what you are about to install before you install it, instructions are at /blog/windows-kernel-driver-attestation-stealth. That is the accountability surface we can defend honestly. SOC 2 is paperwork; SHA256 is bytes.
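Verifying a release needs nothing beyond a hashing tool. A stdlib sketch (the digest you compare against comes from the release page; nothing below is a real release value):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so a large installer never sits in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_release(path: str, published_digest: str) -> bool:
    """Compare the local binary against the digest published for that release."""
    return sha256_of(path) == published_digest.strip().lower()
```

On Windows the equivalent one-liner is PowerShell's `Get-FileHash .\installer.exe -Algorithm SHA256` (the file name here is a placeholder); compare its output to the published value before running the installer.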

Bug bounty. Found something — credential leak, license-bypass class issue, kernel-mode footgun? Report to security@faangcoder.ai with a reproduction and we will respond within five business days. We do not currently run a paid bounty program; we will discuss case-by-case for substantive findings.

§8 — Security FAQ

The five questions buyers ask before checking out.

What happens if you are subpoenaed? We comply with lawful legal process the same as every other company. The practical answer is that what we can disclose is small: we hold a hashed hardware ID for your license, your checkout email, and Stripe billing records. We do not hold your interview problem, your code, your test output, or any record of which interview rounds you ran the tool against. There is nothing to fish.

Verify the security claims yourself before you trust the product.

Security questions: security@faangcoder.ai · See coverage →