Credentials by Other Means
By Thomas Prislac, Envoy Echo, et al. Ultra Verba Lux Mentis. 2026.
The scariest screen in modern tech does not look scary. It looks clean. White field. Rounded corners. A bright button that promises speed. Continue. Join. Link. Scan. Keep going.
In the age of the one-click assistant, the real breach often begins not with a shattered password but with a borrowed identity, a friendly invitation, and a machine-made voice asking you to trust the wrong thing.
Maybe it’s an AI note-taker. Maybe a “copilot” for your inbox. Maybe a community platform, a meeting bot, a journaling companion, a productivity wrapper, a research helper in soft colors and therapeutic language. The sales pitch is always the same: less friction, more flow, fewer passwords, smarter life. The software doesn’t ask you to trust a stranger in a dark alley. It asks you to trust convenience in broad daylight.
That is what makes this story hard to tell honestly. We have been trained to picture digital compromise as a cinematic rupture: a stolen password, a breached database, a masked figure rifling through the wreckage. But much of the new danger arrives as routine. It comes dressed as onboarding. It comes with a privacy link, a profile circle, a cheerful waiting spinner. And when something goes wrong, the burden of proving it often falls on the least powerful person in the chain: the user alone at midnight, clicking through connection pages, session histories, permissions dashboards, and brittle help docs, trying to turn a feeling into evidence. In our earlier work, we argued that opacity does not need to be absolute to be effective; it only needs to be expensive. That is the emotional and political center of sign-in abuse too.
This is not, or not only, a story about stolen passwords. It is a story about the ritual of trust that now surrounds digital identity: the one-click login, the QR code, the invitation link, the “secure” handoff to another app, the collaboration request, the beta product that wants your primary account because setting up a separate one would be too much trouble. The credential is not always stolen in the old sense. Sometimes access is won through permissions, tokens, linked devices, and social pressure.
A major identity provider’s own documentation makes the boundary plain. Standard federated sign-in does not hand a third party your raw password. But it does, with your permission, share basic profile information, allow a third party to create or link an outside account, and potentially request further access to account data. The same documentation also says that removing access later does not claw back information already shared. In other words, “I never gave them my password” can be true and still not mean you stayed in control.
That same provider also warns that some third-party apps are unverified because they are still in development or limited to a specific organization, and that unverified developers can misuse granted access in harmful ways. In an era of perpetual beta, that matters. The market keeps producing experimental AI tools faster than ordinary people can meaningfully inspect them.
Which is why sign-in surfaces should be treated not as neutral plumbing but as governance. Our prior reporting argued that feeds and algorithms are not weather; they are policy. Login systems belong in that category too. They determine who gets recognized, who gets linked, what gets remembered, what persists after consent is revoked, and who pays the labor cost when any of it turns sour.
What the Login Button Actually Does
The friendly button sells a simple fantasy: that identity can be outsourced without residue. Press here, and your burden disappears. No new password to invent. No long form to complete. No tedious account management. But the real transaction is not frictionless. It is merely compressed, then hidden behind design.
Under a standard federated sign-in flow, the outside service may receive your name, email address, and profile picture after you authorize the connection. It can use that information to check whether you already have an account, associate your identity with an existing account, or create a new one. And if it wants more than basic sign-in, it can separately request additional access to account data. The glossy interface suggests a shortcut; the actual event is an identity relationship.
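That compressed transaction can be made concrete. The sketch below builds the kind of authorization request that sits behind a federated sign-in button. The endpoint, client identifier, and redirect address are invented placeholders, but the shape is standard OAuth 2.0, and the scope list is where the actual delegation happens:

```python
# Sketch of the authorization request behind a federated "Sign in" button.
# The endpoint, client_id, and redirect_uri below are hypothetical
# placeholders; the scope parameter is where the real transaction lives.
from urllib.parse import urlencode

def build_auth_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Assemble a standard OAuth 2.0 authorization-code request URL."""
    params = {
        "client_id": client_id,          # identifies the third-party app
        "redirect_uri": redirect_uri,    # where the provider sends the code
        "response_type": "code",
        "scope": " ".join(scopes),       # the named permissions being granted
        "state": "opaque-csrf-token",    # anti-forgery value (placeholder)
    }
    return "https://idp.example.com/authorize?" + urlencode(params)

# "Basic sign-in" asks only for identity claims...
basic = build_auth_url("app-123", "https://app.example.com/cb",
                       ["openid", "email", "profile"])
# ...but nothing stops an app from asking for far more in the same gesture.
greedy = build_auth_url("app-123", "https://app.example.com/cb",
                        ["openid", "email", "profile",
                         "mail.read", "files.readwrite", "contacts.read"])
print(basic)
print(greedy)
```

The point of the contrast is that the same one-click gesture carries whatever scope string the app chose to request; nothing in the button's appearance distinguishes the first request from the second.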
That relationship has consequences outside the login box. The same official guidance warns that when you grant a third-party service access to account data, that service may read, edit, delete, or share sensitive information depending on the permissions you granted. It may copy and save that data on its own servers. Its staff may have visibility into what you shared. Its retention practices, deletion practices, and policy changes may be difficult for you to track. The identity giant can control its own side of the handshake; it cannot control what happens once your data crosses the line.
That is not proof that every outside service is predatory. It is proof that the public story users are told about these systems is radically undersized. The safe-sounding language of modern login encourages a childlike interpretation of risk: either the app got my password or it didn’t. But the actual architecture of exposure is larger than that binary. It includes scopes, tokens, sessions, linked accounts, device registrations, recovery hooks, and the quiet persistence of information already copied elsewhere.
And because all of this is packaged as convenience, the cleanup is privatized. The user must become a forensic accountant of their own identity: Which app did I connect? What did it ask for? Was it verified? What still persists? What was stored on whose server? That burden maps perfectly onto the frame from our earlier drafts: access is not the same as intelligibility, and a system can be technically available while remaining practically unreadable to ordinary people operating under ordinary constraints.
The New Taxonomy of Sign-In Abuse
Once you understand that, the map of abuse gets bigger fast. Yes, the old counterfeit login page still exists. Yes, there are still phishing kits, typo domains, and crude credential grabs. But the modern identity con is often subtler than that. Increasingly, the prompt is real, the sign-in page is real, the protocol is real, and the betrayal happens in the handoff.
One major software company has warned for years about consent phishing: users being tricked into granting permissions to malicious apps, which then gain access to legitimate cloud services without ever needing to steal the password itself. More recently, that same company reported that malicious but standards-compliant applications are abusing legitimate OAuth error-handling behavior to redirect users from trusted identity-provider pages to attacker-controlled infrastructure. The lesson is chilling precisely because it is mundane: modern identity abuse often rides inside systems that are technically behaving as designed.
The same vendor also documented an active “device code phishing” campaign in which targets were contacted through messaging-style lures, invited toward what looked like normal meeting or collaboration flows, and tricked into entering attacker-generated codes on legitimate sign-in pages. The reward for the attacker was not a memorized secret but valid access and refresh tokens, which could then be used to reach email, cloud storage, and other services the victim could access. In other words, the victim did not break security. The victim completed a ritual that security had already taught them to trust.
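The mechanics do not require a real provider to demonstrate. The toy simulation below (all names invented, not any vendor's actual API) captures the trap described above: in a device-code flow, the token is delivered to whoever requested the code, not to whoever typed it into the legitimate sign-in page.

```python
# Toy simulation of an OAuth-style device authorization grant (invented
# names, deliberately simplified). It shows the core hazard of device-code
# phishing: the token goes to whoever *requested* the code, regardless of
# who *entered* it on the legitimate sign-in page.
import secrets

class MockIdentityProvider:
    def __init__(self):
        self._pending = {}  # user_code -> {"approved_as": None or username}

    def start_device_flow(self) -> str:
        """Step 1: a client (here, the attacker) requests a user code."""
        user_code = secrets.token_hex(4).upper()
        self._pending[user_code] = {"approved_as": None}
        return user_code

    def approve(self, user_code: str, username: str) -> None:
        """Step 2: a human enters the code on the *legitimate* sign-in page."""
        self._pending[user_code]["approved_as"] = username

    def poll_for_token(self, user_code: str):
        """Step 3: the original requester polls and collects the token."""
        entry = self._pending[user_code]
        if entry["approved_as"] is None:
            return None  # not yet approved; attacker keeps polling
        return f"access-token-for-{entry['approved_as']}"

idp = MockIdentityProvider()
code = idp.start_device_flow()           # attacker starts the flow...
# ...and sends the code to the victim inside a plausible meeting invite.
idp.approve(code, "victim@example.com")  # victim completes a "normal" login
token = idp.poll_for_token(code)         # attacker now holds a valid token
print(token)  # access-token-for-victim@example.com
```

Every step the victim performs happens on authentic infrastructure, which is exactly why the ritual survives scrutiny.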
Then there is the square little idol of the frictionless age: the QR code. A recent European threat assessment describes malicious QR codes used to abuse “linked devices” features, pairing a victim’s account to actor-controlled infrastructure. A recent federal alert describes a parallel social pattern: targets receive texts or AI-generated voice messages that appear to come from trusted senior figures and are quickly pushed onto secondary encrypted mobile applications, where the social pressure of the interaction can do the rest. The invitation is powerful because it does not feel like a technical event. It feels like belonging, urgency, status, or work.
And then there is the software itself. Europol warns that infostealers do not merely grab login names and passwords; they also collect application tokens and session cookies. Those artifacts can let criminals impersonate a user after the initial moment of compromise, because web services often treat them as proof that authentication has already happened. Europol also notes that malicious applications and browser extensions can circulate through legitimate app stores. The breach is no longer a single break-in. It becomes a residency.
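Europol's point about tokens and cookies is easy to miss because it is architectural. The toy server below (invented names, a deliberate simplification) shows why a copied session cookie is sufficient: the password ceremony happens once, and afterward the cookie alone is treated as proof of identity.

```python
# Toy illustration (invented names) of why stolen session cookies matter:
# the server checks only that the cookie maps to a live session, not that
# the password ritual ever happened on this device.
import secrets

SESSIONS: dict[str, str] = {}  # session_id -> username

def login(username: str, password: str) -> str:
    """The password is checked once; what persists is the session cookie."""
    assert password == "correct horse"  # stand-in for real verification
    session_id = secrets.token_urlsafe(16)
    SESSIONS[session_id] = username
    return session_id

def handle_request(cookie: str) -> str:
    """Any bearer of a valid cookie is 'the user' as far as the app knows."""
    user = SESSIONS.get(cookie)
    return f"inbox of {user}" if user else "401 please sign in"

cookie = login("victim@example.com", "correct horse")
# An infostealer that copies this one string never needs the password:
stolen = cookie
print(handle_request(stolen))  # inbox of victim@example.com
```

This is the "residency" in miniature: the artifact outlives the moment of authentication, and the server cannot tell the bearer from the owner.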
That is why “sign-in exploit” now names an ecosystem, not a trick. It includes counterfeit pages, malicious permissions, token theft, redirect abuse, QR/device linking, session hijack, insider impersonation, and the social engineering theater that makes each of those moves look normal enough to survive first contact with the human nervous system. Evidence still matters. Precision still matters. Not every weird prompt is a conspiracy. But the category itself is real, broad, and getting more refined.
The AI-App Gold Rush
Now pour accelerant on that identity landscape and call it AI.
What the market has produced over the last two years is not just a set of models. It is a culture of wrappers, clones, assistants, copilots, companions, agents, dashboards, automations, “memory” tools, “thinking” tools, “workflow” tools, and pseudo-personal software assembled at industrial speed. Some of it is useful. Some of it is brilliant. Some of it is vapor. And some of it is exactly the kind of polished, trust-hungry surface that modern sign-in abuse thrives inside.
Official reporting has already started describing what that means. Europol says the wider adoption of large language models and other forms of generative AI is improving the efficacy of social-engineering attacks by tailoring communication to victims and automating criminal processes. The same report says recent research indicates LLM-generated phishing messages can achieve higher click-through rates than ones likely written by humans, that voice deepfakes can increase the credibility of spear-phishing, and that generative AI can be used to create fake social-media profiles. In the same breath, Europol warns that advanced persistent and hybrid threat actors can collaborate with criminal networks, using stolen data for espionage, economic advantage, or coercion.
The platform side of the story is just as sobering. One dominant mobile ecosystem said that in a single recent year it prevented more than 1.75 million policy-violating apps from being published, banned more than 80,000 bad developer accounts, blocked 266 million risky installation attempts, and protected users from 872,000 unique high-risk applications. That same provider explicitly says bad actors are leveraging AI to change tactics and launch more sophisticated attacks. Those are not the numbers of a tidy marketplace. Those are the numbers of a flood.
This is why the threat cannot be reduced to “scammers” in the small-time sense. The same ecosystem is attractive to anyone who wants deniable, scalable leverage: fraud shops, mercenary spyware brokers, corporate intelligence contractors, oligarchic private power, and state-linked operators. Recent official reporting tied messaging-centered token abuse to a campaign assessed as aligned with Russian interests, while a federal bulletin described impersonation campaigns using AI-generated voices and rapid migration to encrypted messaging apps. The value proposition is obvious. Why smash the front door when you can walk through the side entrance wearing the uniform of collaboration?
And this is where the earlier framework matters most. The injury is not only the compromise. It is the illegibility after the fact. The user is left in a maze of links, prompts, half-expired notices, security emails, invite logs, and app permissions, told that everything is reviewable while almost nothing is truly readable at speed. Public systems can be public in name only; identity systems can be transparent in name only too. The result is a familiar class politics of proof: the powerful can buy investigators, analysts, counsel, and security teams, while ordinary people are left to do unpaid incident response on their own lives.
Why Invite Culture Matters
The modern internet no longer runs only on passwords. It runs on invitations. Join this workspace. Accept this guest account. Enter this private channel. Scan this code to link your device. Attend this meeting over here, not there. What looks like etiquette is often infrastructure. One major office-suite platform says guest access lets outsiders chat, call, meet, and collaborate on files, and that a guest can be given nearly all the same capabilities as a native team member. The same vendor’s identity docs say that, by default, all users in an organization, including guest users, can invite more external users unless administrators lock that down. That is not a glitch. That is the collaboration economy’s operating system.
And the security side of the house tells on the product side more than it means to. Another official identity document describes “tenant restrictions” built specifically to stop users from signing in to unknown external apps and tenants, to block anonymously joined meetings and files, and even to prevent access with what it calls an “infiltrated token.” Those are not paranoid fantasies. Those are product categories. When the defensive documentation is already naming the abuse cases, it is admitting that the abuse path is built out of ordinary features, ordinary flows, and ordinary invitations.
That is why the invitation matters so much in this story. A federal alert says malicious actors have been using text messages and AI-generated voice messages that appear to come from trusted senior figures, then shifting targets almost immediately onto a secondary encrypted mobile messaging app. A European cyber agency, meanwhile, has documented QR-code phishing in which malicious codes embedded in PDFs redirected people to credential-harvesting pages hosted on trusted cloud platforms. The invitation works because it does not feel like a breach. It feels like access. It feels like status. It feels like being let in.
The old picture of cybercrime was the locked door and the stolen key. The new picture is the velvet rope and the whispered name on the list.
From Hustlers to Mercenaries to States
At the lower end of the ladder, the actors are grubby and familiar: fraud shops, access brokers, impersonators, phishing crews. Europol says social engineering remains a particularly prevalent technique, and notes that initial access brokers are increasingly focused on obtaining valid account credentials as an entry point into victim systems. It also points out that access credentials and stolen data are traded through dedicated channels inside end-to-end encrypted communication apps. The economy here is not theoretical. It is organized.
And the commodity is no longer just the password. Europol says infostealers are used not only to grab login credentials but also application tokens and session cookies, which can then be used to access websites and apps as an authenticated user. The same report notes that malicious apps and browser extensions can circulate through legitimate app stores, while vishing operations increasingly persuade victims to install remote-access or remote-monitoring tools themselves. In other words: the market has moved up the stack. It wants persistence, not just entry.
Now add generative AI. Europol says wider adoption of LLMs and other generative AI tools is improving the efficacy of social engineering by tailoring communications and automating criminal processes. It also says voice deepfakes can increase the credibility of spear-phishing used for business-email compromise and executive fraud. The FBI has separately warned that AI-generated voice messages are already part of live impersonation campaigns designed to build rapport and reroute people onto secondary channels. The machine does not have to outthink you. It only has to out-style your skepticism for thirty seconds.
At the far edge of the spectrum sits mercenary spyware: not the baseline case, not the right explanation for every unnerving digital event, but the outer proof that surveillance-for-hire is a real market and not a paranoid metaphor. Independent university researchers continue to document proliferating spyware operations across jurisdictions, and have argued in plain language that this marketplace cannot be reliably “abuse-proofed” by corporate assurances alone. That matters because every time the public is told that safeguards or guardrails make a surveillance tool uniquely responsible, history has a nasty habit of answering in the negative.
Meanwhile, a major digital defense report says cloud identity systems are now a primary target for attackers seeking persistent, covert access, and describes malicious OAuth apps, device-code phishing, legacy authentication abuse, and adversary-in-the-middle attacks as live methods for bypassing defenses and enabling long-term access. The same report says device-code phishing saw a notable global uptick, that nation-state actors from Russia, Iran, and China as well as cybercriminal groups were using it, and that 93 percent of the device-code phishing events it observed in the previous twelve months occurred in the second half of the year. A separate security blog from that same vendor describes one campaign aligned with Russian interests that used lures resembling messaging-app experiences and targeted government, NGOs, defense, telecommunications, health, higher education, and energy. Identity abuse has become geopolitical because identity is the cheapest durable beachhead in the room.
That is also why the same access economy is structurally attractive to oligarchic private power, contractor ecosystems, and state-linked operations alike. That last sentence is an inference, not a claim that all these actors belong to one seamless network. But the incentives rhyme: deniability, scale, outsourced capability, and an easy route through the rituals of trust people already perform every day.
How It Feels From the Inside
From the inside, this rarely feels like a movie. It feels like paperwork with a pulse. A strange invite. A device-link prompt you don’t remember requesting. A collaboration request from someone adjacent enough to be plausible. A QR code in a PDF. A voice that sounds almost right. A service you tried once in beta and forgot about. Then, later, the little aftershocks: an uncanny message, an odd login page, a reset prompt, a feeling that your digital life has developed a second narrator.
This is why the experience can become psychologically corrosive so quickly. Not because every anomaly proves a campaign, but because the modern internet is already optimized to feel intimate, adaptive, and a little uncanny. Personalization, recommendation, frictionless identity, and growth tactics create enough ordinary eeriness that abuse can hide inside the fog. One major security report even notes that most users have not been taught to look for attacks targeting device-code flows, and that traditional phishing detections often miss them because the attacker is using legitimate codes and tokens. The victim is left with the worst possible combination: something real may have happened, but the shape of it is socially and technically deniable.
That is where the article should stay disciplined. We should not flatter fear into certainty. We should say something harder and truer: many weird events online have mundane explanations, and many sophisticated abuses deliberately mimic those mundane explanations because ambiguity is part of the weapon. The injury is not only intrusion. It is the transfer of proof labor onto the target.
Opacity, in other words, does not have to be total. It only has to be exhausting.
The Accountability Gap
The official help pages of one dominant identity provider lay out the problem with almost bureaucratic calm. They say that when you use its federated sign-in system, a third party can retrieve your basic profile information if consent has already been granted; that removing consent stops future access to updates, but does not take back data already shared; and that the third party may keep information you provided when you signed in. A related help page warns that when you give a third-party app access to account data, that app can copy and save your data on its own servers, that a breach of those servers can expose your information, that staff at the third party may be able to view what you shared, and that deleting data from those servers may be difficult or slow. The product language is tidy. The implication is not.
The same provider also says some third-party apps are “unverified” because they are still in development, still being tested, or are internal and limited to a specific organization, and warns that untrustworthy developers may use such apps for harmful purposes, including installing unwanted software or stealing personal information. That is what makes the beta era so fraught. The AI rush has normalized the idea that unfinished software should be trusted first and audited later. But unfinished trust surfaces are still trust surfaces.
On the marketplace side, one major mobile ecosystem said that in 2025 it prevented more than 1.75 million policy-violating apps from being published and banned more than 80,000 bad developer accounts that attempted to publish harmful apps. Those numbers do not prove that every AI-branded tool is malicious. They prove something more unsettling: the public is trying to navigate an app universe so crowded, opportunistic, and adversarial that even industrial-scale review systems are fighting a constant rear-guard action. Trust-by-vibe is not a governance model.
And so the burden falls downhill. The platform gets the dashboard. The user gets the scavenger hunt. Official help pages say you can review and update your connections, inspect what an app can access, remove access, or report an app. All true. All useful. But notice what has happened: “transparency” has come to mean that, after the fact, an ordinary person is invited to perform unpaid forensic accounting on their own identity. That is not nothing. It is also nowhere near enough.
The rich already have automation. They just hire it. Everyone else gets a help center and a headache.
What Readers Should Check Today
First, review third-party connections like you are reading the guest list after a break-in. One major identity provider says you can inspect your connections, stop using federated sign-in for a service, remove a third party’s access to account data, and even report an app you believe is misusing your data. The same help pages also say that deleting a connection does not delete your data on the third-party service, which is precisely why this step matters most as prevention and containment, not magical erasure.
Second, move your most sensitive accounts toward phishing-resistant authentication. NIST says phishing-resistant authenticators prevent attacker-in-the-middle capture of authentication data, eliminate the need for users to type authentication data over the internet, and that FIDO authenticators paired with WebAuthn are the most common commercially available examples. In plain English: make it harder for a beautiful fake sign-in surface to matter.
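The property NIST is describing can be modeled in miniature. The sketch below is not the real WebAuthn or FIDO protocol (those use public-key signatures, and every name here is invented); it uses a shared-key HMAC only to show the load-bearing idea, which is that the authenticator signs the origin it actually sees, so a perfect visual clone still produces a signature the real site rejects.

```python
# Simplified model of origin binding (invented names; NOT the actual
# WebAuthn/FIDO API, which uses asymmetric keys). The signature covers the
# site identity, not just the challenge, so a pixel-perfect fake page fails.
import hashlib, hmac, secrets

DEVICE_KEY = secrets.token_bytes(32)  # stays on the user's authenticator

def authenticator_sign(challenge: bytes, origin: str) -> bytes:
    """The authenticator binds the origin it actually sees into the signature."""
    return hmac.new(DEVICE_KEY, challenge + origin.encode(), hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes, expected_origin: str) -> bool:
    """The real site verifies against its own origin, not the attacker's."""
    expected = hmac.new(DEVICE_KEY, challenge + expected_origin.encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)

challenge = secrets.token_bytes(16)

# Legitimate flow: the user really is on the real site.
good = authenticator_sign(challenge, "https://bank.example.com")
print(server_verify(challenge, good, "https://bank.example.com"))  # True

# Phishing flow: a flawless clone still has the wrong origin baked in.
phished = authenticator_sign(challenge, "https://bank-example.evil.example")
print(server_verify(challenge, phished, "https://bank.example.com"))  # False
```

The user never has to notice the fake; the math notices for them. That is what "phishing-resistant" means in practice.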
Third, treat QR codes like links with superior branding. A European threat assessment says malicious QR codes have been used to redirect victims to credential-harvesting pages hosted on trusted cloud platforms. The square is not proof of legitimacy. It is just a shortcut that removes one of the few cues users once relied on: the visible URL.
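A decoded QR payload is usually just a URL, which means the missing cue can be restored in software. The sketch below (hypothetical allowlist, invented hosts) performs the scheme and host check that scanning silently skips:

```python
# A QR code typically decodes to a plain URL; this sketch (hypothetical
# allowlist, invented hosts) performs the check that scanning hides.
from urllib.parse import urlparse

TRUSTED_HOSTS = {"accounts.example.com", "meet.example.com"}  # illustrative

def vet_qr_payload(payload: str) -> str:
    """Inspect a decoded QR payload before acting on it."""
    url = urlparse(payload)
    if url.scheme != "https":
        return "reject: not HTTPS"
    if url.hostname not in TRUSTED_HOSTS:
        return f"warn: unexpected host {url.hostname!r}"
    return "ok"

print(vet_qr_payload("https://accounts.example.com/device"))
# A lookalike domain hosted on a "trusted" cloud platform still fails:
print(vet_qr_payload("https://accounts-example.notcloud.example/device"))
```

The habit to teach is simple: decode first, read the host, then decide, exactly as you would with a link in an email.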
Fourth, be skeptical of any interaction that creates rapport and then abruptly asks you to move to another app, join a private channel, or enter a code from an invitation. The FBI says this pattern is active in impersonation campaigns using texts and AI-generated voice messages. A major enterprise report says threat actors often make first contact through third-party messaging applications, pose as trusted contacts or organizers, and then capture tokens after the target enters a device code through what appears to be a legitimate flow. A related vendor blog says some actors have been especially effective when they combine targeted social engineering with out-of-band communications that slip around normal phishing detection. That pattern belongs in the article because it teaches readers to notice choreography, not just suspicious spelling.
And fifth: preserve evidence before you soothe yourself out of it. Screenshot the invite. Save the email. Note the date, app, device, and what happened next. Do not convert a hunch into a certainty. But do not discard the breadcrumbs that would let you test the hunch later.
What the Industry Should Be Forced to Change
If the industry’s own documents warn about unrestricted user consent, broad guest-invite defaults, unknown external tenants, infiltrated tokens, persistent third-party data retention, and the special risks of unverified apps, then the industry should stop pretending these are exotic edge cases. They are normal failure modes of a normal architecture. Official help pages and enterprise guidance already describe the hazards. The scandal is that the mitigations are still too often optional, buried, or designed for administrators rather than ordinary humans.
So the demands should be blunt. Plain-language consent screens that explain persistence, retention, and scope in human terms. User-visible token and session inventories that do not require a degree in product archaeology. One-click termination of linked devices, guest accounts, and third-party sessions. Stricter defaults for external invites. Clear provenance for AI apps, wrappers, and developer histories. Better warnings for testing-stage or lightly reviewed products. And strong protections by default for journalists, researchers, organizers, dissidents, and anyone else likely to face targeted social engineering or spyware pressure. None of that is anti-innovation. It is what innovation looks like when somebody other than growth teams gets a vote.
The demand is not zero friction. The demand is honest friction. Enough resistance at the right moment to remind the user that identity is not being borrowed for a second; it is being delegated, replicated, stored, and potentially made durable somewhere they cannot see.
Convenience as Governance
The one-click assistant. The QR code. The collaboration invite. The private channel. The voice that sounds familiar enough. The beta app with a gorgeous interface and a vague privacy policy. These are no longer mere conveniences. They are border crossings.
That is the deeper thesis of the piece. The danger is not simply that criminals and spies have become more sophisticated, though they have. It is that trust has been modularized. Broken into buttons, codes, invitations, tokens, and soft little onboarding rituals that ask for almost nothing in any one moment and yet, taken together, can surrender a great deal.
And because the market is now flooded with AI products, AI wrappers, AI companions, and AI-generated app surfaces, the old folk wisdom about “just use your judgment” has started to rot. Judgment still matters. But judgment without legibility is just a nicer word for exposure.
So let the article end there: not with panic, and not with a how-to for fear, but with a clearer moral vocabulary. The clean interface is not the opposite of power. It is one of power’s favorite disguises.
In the age of frictionless identity, the most dangerous door may be the one that opens politely.
Works Cited
Citizen Lab. “Virtue or Vice? A First Look at Paragon’s Proliferating Spyware Operations.” Citizen Lab, 19 Mar. 2025.
ENISA. ENISA Threat Landscape 2025. European Union Agency for Cybersecurity, Oct. 2025.
Europol. Steal, Deal and Repeat: How Cybercriminals Trade and Exploit Your Data. Internet Organised Crime Threat Assessment (IOCTA) 2025, Publications Office of the European Union, 2025.
Federal Bureau of Investigation. “Senior U.S. Officials Impersonated in Malicious Messaging Campaign.” FBI, 15 May 2025.
Google. “How Sign in with Google Works.” Google Account Help, accessed 9 Mar. 2026.
Google. “Learn Why Sign in with Google Is Helpful.” Google Account Help, accessed 9 Mar. 2026.
Google. “Manage Connections between Your Google Account & Third Parties.” Google Account Help, accessed 9 Mar. 2026.
Google. “Share Some Access to Your Google Account Data with Third-Party Apps.” Google Account Help, accessed 9 Mar. 2026.
Google. “How Google Helps You Share Some of Your Account Data with Third Parties Safely.” Google Account Help, accessed 9 Mar. 2026.
Google. “Keeping Google Play & Android App Ecosystems Safe in 2025.” Google Online Security Blog, 19 Feb. 2026.
Microsoft. “Configure External Collaboration Settings.” Microsoft Learn, 9 July 2025.
Microsoft. “Configure Microsoft Entra for Zero Trust: Protect Tenants and Isolate Production Systems.” Microsoft Learn, 29 Oct. 2025.
Microsoft. Microsoft Digital Defense Report 2025. Microsoft, 2025.
Microsoft. “Malicious OAuth Applications Used to Compromise Email Servers and Spread Spam.” Microsoft Security Blog, 22 Sept. 2022.
Microsoft. “OAuth Redirection Abuse Enables Phishing and Malware Delivery.” Microsoft Security Blog, 2 Mar. 2026.
Microsoft. “Set Up Tenant Restrictions v2.” Microsoft Learn, 30 May 2025.
Microsoft. “Storm-2372 Conducts Device Code Phishing Campaign.” Microsoft Security Blog, 13 Feb. 2025.
Microsoft. “Use Guest Access and External Access to Collaborate with People Outside Your Organization.” Microsoft Learn, 12 May 2025.
Microsoft. “Defending against Evolving Identity Attack Techniques.” Microsoft Security Blog, 29 May 2025.
National Institute of Standards and Technology. “Phishing Resistance – Protecting the Keys to Your Kingdom.” NIST, 1 Feb. 2023.
“Investigating the ‘Truman Show’ Internet: How a Target’s World Could Be Warped and How to Fight Back.” Ultra Verba Lux Mentis, https://www.ultraverbaluxmentis.org/articles/investigating-the-truman-show-internet.
Prislac, Thomas, and Envoy Echo. Tech-Bro Toxicity and Algorithmic Manipulation: A GUFT/ΔSyn, Post-Modern, and Feminist Study of Platform Recommender Systems. Draft v1.0, Ultra Verba Lux Mentis / ΔSyn / GUFT Research Division, Dec. 2025. https://zenodo.org/records/17904347 (Suppressed)
“When Public Records Are Public in Name Only: Why Working People Need Agentic AI to Map Business Power.” Ultra Verba Lux Mentis, https://www.ultraverbaluxmentis.org/articles/when-public-records-are-public-in-name-only.