You Won't Be Banned. You'll Just Disappear.
AI agents and fake profiles are flooding platforms. Verification is the response. But the price isn't a checkbox; it's your face, in someone else's database.
The most effective form of exclusion doesn’t announce itself. No error message, no appeal process. You simply stop being shown to people who matter, stop appearing in searches, stop receiving replies, and after a while you stop noticing because you’ve stopped expecting anything.
Platforms have been running this kind of soft removal for years on content they don’t like. This practice is known as shadow banning. The difference now is they’re about to run it on people they can’t verify.
That’s where this is going. Not a ban. A filter.
A recruiter on LinkedIn searching for candidates clicks a toggle: “Verified profiles only.” They’re not trying to exclude anyone on principle. They’re drowning. In some hiring categories, AI-generated applications already account for the majority of inbound volume.
A person who has submitted one job application is sitting in the same pile as a bot that submitted four thousand, and the recruiter has no reliable way to tell the difference from the outside. The toggle isn’t malicious. It’s a desperate reaction to a system filled with fake AI-generated resumes.
But the person on the other side of that toggle who hasn’t verified, for whatever reason (maybe they’re skeptical of biometric data collection, which wouldn’t surprise me, or maybe they simply haven’t gotten around to it), doesn’t get a rejection. They don’t get anything. Their profile continues to exist. It just stops being visible to the people who would act on it.
The invisible category grows quietly until someone points at the numbers. By then the filter is default behavior, not an option. And opting out of verification looks increasingly like opting out of the platform itself, even if the terms of service never say that.
And it’s not just LinkedIn. The other big platforms, Facebook, Instagram, and X (formerly Twitter), are all pushing verification in one way or another, promising things like better visibility for your posts and improved filtering.
The flood nobody planned for
The volume of AI-generated content online has crossed some threshold where the old trust signals don’t work anymore. Profile pictures mean nothing. Work histories can be fabricated in seconds. Endorsements can be purchased or generated.
The signals that used to tell a platform “this is a real person with genuine intent” have been so thoroughly mimicked that they’ve lost most of their value.
There’s research on this. A 2023 joint report from Georgetown’s Center for Security and Emerging Technology¹, OpenAI, and the Stanford Internet Observatory examined how language models could be weaponized for influence operations at scale.
The authors argued that AI-generated content would make existing campaigns harder to detect, specifically because text generation tools produce original output each time they run, removing the copy-paste patterns that researchers had previously used to identify synthetic activity. The threat isn’t just volume. It’s that the old fingerprints stop working.
The recruiting side of this is even more acute. The resume pipeline has been weaponized. There are now AI services, some operating openly, that let a single person submit applications at scale, auto-customize cover letters, stuff keywords for ATS systems (keyword stuffing doesn’t work, by the way), and generate portfolio samples on demand.
A recruiter at a mid-size sales company in Germany told me last fall, while we were both sitting through a very slow panel discussion on AI ethics that neither of us wanted to attend, that her team had started using a private spreadsheet to track which names appeared more than twice in their inbound pile in a given week. One name had appeared 47 times across different job listings.
That’s not an edge case anymore. That’s the environment recruiters are operating in, and they’re going to reach for whatever tool reduces that noise, including identity verification, even if they haven’t thought hard about what that tool actually does under the hood.
There’s a whole conversation about what platforms could do differently at the algorithmic level to identify synthetic volume without requiring individual biometric compliance. I’m not going to get into that here, partly because I’m not an expert on the topic, and partly because I don’t think it’s politically likely to go anywhere in the next few years. However, some form of verification is on the horizon.
Verification is not what the brochure says
When platforms say “verification,” most users picture a blue checkmark. A quick email confirmation. Maybe a government ID scan, the kind you do at a car rental counter and forget about. The reality of what’s being built is considerably more involved.
Persona, one of the dominant identity verification providers used by companies like Coinbase, Brex, and a growing list of HR platforms, runs what the industry calls “identity graph” verification. You submit a government ID. You submit a selfie. The system maps your facial geometry against the ID photo and checks both against a database of known fraud patterns. It also, and this is the part the onboarding flow doesn’t emphasize, retains that biometric data as part of its fraud detection infrastructure.
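If it helps to see the shape of that flow rather than read it described, here is a minimal sketch of the pipeline as I understand it. Every name, threshold, and data format in it is an illustrative assumption of mine, not Persona’s actual API or matching logic.

```python
import hashlib
from dataclasses import dataclass


def extract_face_template(image: bytes) -> str:
    # Stand-in for a face-geometry extractor: collapse the image to a template.
    return hashlib.sha256(image).hexdigest()


def compare_templates(a: str, b: str) -> float:
    # Stand-in similarity score; a real matcher returns a graded value.
    return 1.0 if a == b else 0.3


@dataclass
class VerificationOutcome:
    matched: bool                  # selfie geometry matched the ID photo
    fraud_hit: bool                # either template hit the shared fraud database
    retained_templates: list[str]  # what the provider keeps after you close the tab


def verify(id_photo: bytes, selfie: bytes, fraud_db: set,
           threshold: float = 0.85) -> VerificationOutcome:
    id_tpl = extract_face_template(id_photo)
    selfie_tpl = extract_face_template(selfie)
    score = compare_templates(id_tpl, selfie_tpl)
    # The retention step is the one the onboarding modal doesn't emphasize:
    # both templates join the provider's cross-client fraud infrastructure.
    return VerificationOutcome(
        matched=score >= threshold,
        fraud_hit=(id_tpl in fraud_db or selfie_tpl in fraud_db),
        retained_templates=[id_tpl, selfie_tpl],
    )


# With these toy stubs a "match" only happens when the two images are
# byte-identical; a real matcher works on facial geometry, not file hashes.
outcome = verify(b"passport-photo-page", b"passport-photo-page", fraud_db=set())
print(outcome.matched, outcome.fraud_hit, len(outcome.retained_templates))
```

The detail that matters is the last field. Even in this toy version, the output of a successful check isn’t just a yes or no; it’s a pair of face templates that stay behind.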
Persona’s privacy documentation does disclose this. The issue isn’t that they’re hiding it. The issue is that almost no one reads it, and the platforms integrating Persona often don’t surface it clearly in their own UX. You click through a verification modal on LinkedIn or whatever platform has integrated the service, and you’re not thinking about what you’re consenting to. You’re thinking about getting past the gate.
I did this myself last year on LinkedIn, which required Persona verification to unlock basic features. I went through the whole flow in about four minutes on a Tuesday morning before I’d had coffee. I didn’t read the privacy policy. I have years of experience thinking about data infrastructure, and I didn’t read the privacy policy.
That bothers me more than I want to admit. My rationalization at the time: the passport I used was an old one, and I’d be getting a new one after the verification anyway.
But the biometric data you generate during verification doesn’t just belong to the platform you were trying to use. It lives in a third-party system with its own retention policies, its own breach surface, and its own business model.
If you request the removal of your data, you’ll receive a response stating: “For LinkedIn-related verifications processed through Persona, we apply an automatic redaction policy to personal data collected on our platform. After these timeframes, data is automatically deleted in accordance with our retention practices.” However, they do not specify what these timeframes actually are.
Some of that data will outlast your account on the platform that requested it. Some of it may be shared with other clients of the verification provider for cross-platform fraud detection. Whether you think that’s a reasonable tradeoff depends on who you are and what you’re protecting.
This article, “I Verified My LinkedIn Identity. Here’s What I Actually Handed Over,” did a good job of showing where our data goes. Solid piece overall, but there are a few things I disagree with. The $50 liability cap sounds scary, but it’s unenforceable against EU residents: GDPR Article 82 gives you an independent right to compensation that no US terms of service can override.
And the part about Anthropic, OpenAI, and Groq “processing your passport”: being listed as subprocessors for “Data Extraction and Analysis” almost certainly means they’re used for OCR and document parsing via API, not that your passport becomes training data for ChatGPT or Claude. Big difference, though it’s still good to know where your data is going.
Persona, biometrics, and a problem with no clean edges
Biometric data is different from other personal data in a specific way that doesn’t get discussed enough in the context of verification: you can’t change it. If your password is compromised, you change your password. If your email address is leaked, you make a new one. Your facial geometry, your fingerprint, the pattern of your iris, these are not things you can rotate.
A data breach involving biometric records is permanent exposure. Not exposure until you update your credentials. Permanent.
There have been significant biometric breaches already. The BioStar 2 platform, used by banks, the UK Metropolitan Police, and defense contractors, exposed over a million fingerprints and facial recognition records in 2019. The data was sitting in an unprotected database. Researchers found it in less than a day. The people whose fingerprints were in that database had no way to remediate the exposure. They still don’t.
Researchers looking into Discord’s age-verification system found an exposed frontend linked to Persona, the identity verification service Discord uses.
What I’m describing here is a structural risk in the category of service they represent, not an accusation about their particular security practices. The risk exists whether or not any specific provider has been compromised yet.
There’s also the question of who audits the verifiers. Platforms audit their vendors to varying degrees. The verification providers are audited by their enterprise clients, who are mostly checking for uptime and compliance certifications, not for the downstream handling of the biometric data they’ve collected on behalf of thousands of users.
I don’t have a clean answer to what that oversight actually looks like in practice. I’ve asked people who should know and gotten answers that were confident but vague.
That question, who watches the watchers, stays with me. I haven’t been able to resolve it.
Invisible by design
Here’s how the invisibility part actually works in practice.
Platforms don’t need to mandate verification. They just need to create systems where unverified users receive slightly less distribution, slightly fewer impressions, slightly lower placement in search results. Nothing the user can point to. No policy they can cite in a complaint. Just a gradual dimming.
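To make the gradual dimming concrete, here’s a toy ranking function of the kind I’m describing. The field names and the 0.8 multiplier are my own assumptions for illustration; no platform publishes its actual weights.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    name: str
    relevance: float   # how well the profile matches the recruiter's query
    verified: bool


UNVERIFIED_FACTOR = 0.8  # small enough to feel harmless, compounds everywhere


def rank(profiles: list) -> list:
    def score(p: Profile) -> float:
        return p.relevance * (1.0 if p.verified else UNVERIFIED_FACTOR)
    return sorted(profiles, key=score, reverse=True)


candidates = [
    Profile("verified, decent match", relevance=0.70, verified=True),
    Profile("unverified, stronger match", relevance=0.80, verified=False),
]
for p in rank(candidates):
    print(p.name)
# Prints the verified profile first: 0.70 beats 0.80 * 0.8 = 0.64. Nothing is
# removed or labeled; the stronger candidate is simply shown later, or not at
# all once the result list is truncated.
```

Nothing in that function bans anyone. It just makes the unverified profile lose comparisons it should win, and at the scale of a search index that is the whole mechanism.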
This is not speculation. Content moderation researchers have documented this approach across multiple major platforms going back to 2018, when Facebook and Instagram became the first to publicly acknowledge algorithmically reducing engagement with what they called “borderline content.”
Tarleton Gillespie, who has studied platform reduction policies in depth, argues that demotion exists precisely because removal creates conflict. Banned users complain, appeal, and sometimes organize. Users whose reach quietly drops by 80% mostly just assume their content isn’t connecting and post less.
The problem is that “targeted account types” expands over time. It starts with obvious spam. Then it includes accounts with incomplete profiles. Then accounts without verified contact information. Then accounts without verified identity. Each step feels like a minor trust improvement. Each step also slightly raises the barrier for participation.
Sometimes I think about how completely we’ve accepted the idea of the velvet rope. Not the good clubs with the velvet rope, which most people can’t get into and know they can’t get into, but the mundane version: the slightly better seat on the airplane, the faster security lane, the separate boarding queue.
These things were aggressively contested when airlines first introduced them. Now they’re invisible to most people in the same way the price of a checked bag is invisible until you’re at the counter. Not because resistance succeeded but because resistance failed, and then the next generation grew up thinking this was just what flying was.
Maybe that’s not a perfect parallel to digital verification, but you get my point.
Where I can’t follow my own argument
The part I keep getting stuck on is this: if AI agents really can simulate human behavior at scale, and the evidence suggests they increasingly can, then what exactly is the alternative to some form of human verification?
The answer I want to give is “better detection on the platform side.” But I’ve talked to enough people building detection systems to know they’re in a genuine arms race, and the offense is currently ahead of the defense.
Generating convincing synthetic behavior is cheap. Detecting it reliably is expensive and error-prone. The false positive rates on current AI content detection are bad enough that deploying them at scale would catch a lot of real people in the net.
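The arithmetic behind that last sentence is worth making explicit. The numbers below are assumptions I picked to be generous to the detector, not measurements from any real platform.

```python
# Base-rate arithmetic: even a detector with a "good" error profile flags a
# huge absolute number of real people at platform scale. All figures are
# illustrative assumptions, not measurements.
human_posts = 100_000_000      # posts from real people per day
synthetic_posts = 5_000_000    # AI-generated posts in the same period
false_positive_rate = 0.01     # detector wrongly flags 1% of human posts
true_positive_rate = 0.90      # detector catches 90% of synthetic posts

humans_flagged = human_posts * false_positive_rate
bots_flagged = synthetic_posts * true_positive_rate
human_share_of_flags = humans_flagged / (humans_flagged + bots_flagged)

print(f"{humans_flagged:,.0f} posts from real people flagged per day")
print(f"{human_share_of_flags:.0%} of all flags land on real people")
```

A million wrongly flagged humans a day, under assumptions friendlier than anything I’ve seen claimed in practice, is the kind of number that makes “just detect the bots” a much harder sell than it sounds.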
So verification starts to look like the less-bad option. Not the good option, less bad.
I’m genuinely uncertain whether the people resisting identity verification are protecting something meaningful or acting out a preference that’s already becoming impossible to sustain. Perhaps they’re smarter than the rest of us who were foolish enough to click the “verify” button. My instinct is that the resistance is right on principle and losing on the ground. That’s not a comfortable place to land.
The version of this argument I can’t dismiss is the one made by people in countries where digital identity infrastructure has been used for suppression. India’s Aadhaar system was sold as financial inclusion. In states like Jharkhand, biometric match failure rates ran close to 50%, meaning nearly half of enrolled residents couldn’t authenticate to receive food rations. Human Rights Watch documented cases where that failure translated directly into starvation. The exclusion wasn’t always intentional. Sometimes it was just a fingerprint reader that didn’t work. That almost makes it worse.
China is the sharper version of this concern, whether the social credit system or the well-documented cases in Xinjiang: iris scans, DNA collection, facial recognition tied to movement controls, used specifically against the Uyghur population. That’s not a hypothetical misuse. It happened, and the identity infrastructure made it possible at scale.
I’m writing primarily about professional platforms in Western contexts, where the power dynamics are different. But the infrastructure, once built, doesn’t stay in one context.
That’s probably worth thinking about more than most people doing the verification flow on a Tuesday morning actually do. More than I did, anyway.
The title of this article isn’t a metaphor. It’s a description of a mechanism that already exists and is being quietly extended. You won’t receive a notice. There’s no form to contest. The platform will continue to host your account, your posts will technically be public, and your profile will load if someone types in the direct URL.
The disappearance happens upstream, in the filters and ranking signals and recruiter toggles that determine whether anyone who doesn’t already know you exists ever finds you. That’s a different kind of removal than anything platforms have done before, and it requires a different kind of attention to notice.
The people most affected won’t be the ones who refused verification on principle after reading the privacy documentation carefully. They’ll be the ones who never got around to it, who didn’t have the right ID format, who found the flow confusing, who verified once and had the data rejected for reasons the system didn’t explain.
Quiet exclusion disproportionately lands on people who already have fewer resources to navigate bureaucratic friction. That’s not an accident of design. It’s usually a feature of how these systems scale.
In Case You Missed It:
1. Josh A. Goldstein et al., “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations,” Georgetown CSET / OpenAI / Stanford Internet Observatory, 2023. https://cyber.fsi.stanford.edu/io/publication/generative-language-models-and-automated-influence-operations-emerging-threats-and