“If This Found You, It Must Be Meant for You”: How Algorithms Manufacture Digital Omens

A research-grounded look at how recommendation systems, pattern-seeking minds, and spiritual influencer culture can create false coherence, and, for some users, real harm.

By Thomas Prislac, Envoy Echo, et al. Ultra Verba Lux Mentis. 2026.

These cards are invented and do not meaningfully resemble any existing decks sold by talented artists.

The uncanny sentence

It begins, as so many soft delusions do now, not with thunder or incense, but with a line of text floating in the lit chapel of a phone screen: If this video found you, it was meant for you.

That sentence does not knock on the door of reason. It slips through a side entrance. It arrives clothed in intimacy, wearing the perfume of fate. It asks not to be verified, only felt. And for a user alone at midnight (yes, research is a lonely affair), tired, frightened, grieving, hopeful, broke, or simply porous to meaning after a long day, it can feel less like content than visitation.

That is the first thing worth saying plainly: the modern feed is not experienced as a filing cabinet. It is experienced as weather. It gathers above the mind, breaks over it in patterns, and leaves behind an after-sense that something beyond ordinary causality has spoken. The old village oracle sat in a cave, behind smoke and symbol. The new one appears between cooking tutorials, music clips, and advertisements for moisturizer. It wears eyeliner, speaks in a ring light halo, and says with absolute confidence that your enemies are falling, your abundance is near, your healing has already begun, your stalkers are being exposed, your phone has been tapped, your spirit team is at work, and, most seductively of all, that the mere appearance of the message in your feed proves its relevance.

But the machine behind this little miracle is not supernatural. It is, in the driest possible language, a recommendation system. YouTube itself says recommendations are shaped by signals such as clicks, watch time, likes, dislikes, shares, and user feedback. In other words, the platform does not need to know your soul to seem uncannily intimate; it only needs enough behavioral residue to make a statistically elegant guess about what will hold your gaze. The sorcery is mostly arithmetic dressed in velvet. And yet “mostly” matters.
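To make the “arithmetic dressed in velvet” concrete, here is a deliberately toy sketch of signal-based ranking. It is not YouTube’s actual system; the signal names, weights, and video data are illustrative assumptions only. The point is how little the machine needs: a handful of behavioral traces and a weighted sum are enough to surface the “uncannily relevant” clip.

```python
# Toy illustration of signal-based ranking -- NOT any platform's real system.
# Signal names, weights, and sample data are illustrative assumptions only.

def score(video, weights):
    """Combine behavioral signals into a single relevance estimate."""
    return sum(weights[s] * video.get(s, 0.0) for s in weights)

def recommend(videos, weights, k=2):
    """Return the k highest-scoring videos -- the 'uncanny' picks."""
    return sorted(videos, key=lambda v: score(v, weights), reverse=True)[:k]

# Hypothetical signal weights: watch time dominates, dislikes push down.
weights = {"watch_time": 1.0, "likes": 0.5, "shares": 0.8, "dislikes": -0.7}

videos = [
    {"id": "tarot_reading", "watch_time": 0.9, "likes": 0.6, "shares": 0.4},
    {"id": "cooking_tip",   "watch_time": 0.3, "likes": 0.2, "dislikes": 0.1},
    {"id": "music_clip",    "watch_time": 0.5, "likes": 0.4, "shares": 0.1},
]

top = recommend(videos, weights)
print([v["id"] for v in top])
```

Nothing here reads a soul; the tarot clip rises simply because past behavior made its engagement signals the strongest.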

Once human beings live inside such systems, arithmetic does not stay arithmetic for long. It becomes theater. It becomes cosmology. Researchers now have a name for one version of this transformation: algorithmic conspirituality, the belief that social-media algorithms can know users intimately and deliver personally meaningful messages at just the right moment, often with a revelatory or spiritual charge. In the scholarly literature, this is not treated as a fringe curiosity but as a recognizable media phenomenon emerging from the combination of personalization, connectedness, and creator performance.

A related idea, algorithmic divination, names the broader habit of reading algorithmic outputs the way earlier cultures read cards, stars, tea leaves, or sacred lots: as clues about one’s identity, destiny, or hidden situation. One recent study of TikTok’s spiritual subcultures goes further, arguing that some users and creators treat the platform itself as a partner in mystical practice, an engine not merely of distribution, but of encounter. The feed ceases to be a pipe and becomes, in imagination, a river with intent.

This is where the technical story and the human story fuse.

A recommender system is built to rank, sort, and surface. It is an optimization machine. But the mind receiving its output is not an optimization machine. It is a pattern-hungry animal, an old mammal with myth still living in the folds of cognition. It does not merely register coincidence; it narrates coincidence. It does not merely note recurrence; it asks what recurrence means. So when a person watches one tarot reading in a moment of vulnerability, then sees another, then another, then one that names betrayal, then one that names money, then one that names spiritual warfare, then one that says this was not random, the system may be doing nothing more occult than probabilistic matching. But probabilistic matching, repeated often enough and narrated dramatically enough, can begin to feel like destiny.

That feeling is not trivial. It is the hinge on which the whole genre swings.

Recent creator research captures this almost perfectly in the phrase: “Whoever needs to see it, will see it.” In interviews, some creators described the algorithm as a kind of divine intermediary; others used destiny language more strategically, as a way to intensify relevance, intimacy, and engagement. Either way, the effect is the same: the platform’s opacity becomes a narrative asset. The less a viewer understands why a video arrived, the easier it becomes to baptize the recommendation as meaningful. Mystery is converted into authority.

This would still be merely curious if the messages were always benign—gentle meditations, harmless affirmations, bits of symbolic theater. But the content categories now orbiting this rhetorical frame are often much heavier than that. Many videos do not stop at vague reassurance. They speak of gang-stalking, surveillance, curses, fraud, enemies, plots, attempted murder, miraculous reversals, destined windfalls, and hidden diagnoses of social reality. They tell people not simply that they are special, but that they are under threat or cosmically selected. That is where a recommendation architecture can begin to function less like entertainment and more like an accelerant.

Even YouTube, in a more limited context, has implicitly acknowledged the underlying mechanism. In its teen-safety policy updates, the company says some categories of videos may be harmless when viewed once but problematic when viewed repeatedly, and it has introduced safeguards to limit repetitive recommendations for certain topics. That is a quietly profound admission: harm may not reside only in the individual video, but in the rhythm of recurrence itself. One candle is light. A hundred candles in a locked room change the air.

So this article begins from a simple proposition: the problem may not be “magic” in any supernatural sense. The problem may be a technologically amplified false coherence, a persuasive sensation that the machine is not merely predicting attention, but revealing truth. Once that sensation takes hold, the platform does not have to prove very much. It only has to keep arriving on time. And for a lonely mind, timing is often mistaken for providence.

Giving the apparition a name

Before a culture can protect itself from a phenomenon, it usually has to do one humble, ancient thing first: give it a proper name. For a long time, what many people were feeling in their feeds had no settled vocabulary. It was dismissed as gullibility, intuition, coincidence, or internet weirdness. But the sensation itself was more specific than that. It was the uncanny conviction that the platform had not merely recommended a message, but had recognized a need; that the machine had not simply sorted content, but had somehow peered behind the curtain of the self and delivered a message at the exact moment it would land with the force of revelation. Scholars now call this algorithmic conspirituality: the belief that social-media algorithms can know users intimately and convey personally meaningful messages at precisely the right moment to revelatory effect. That phrase matters because it gives scholarly shape to something millions of users have felt in private, often without language sturdy enough to hold it.

The word itself is wonderfully diagnostic. It tells us that we are not looking at “spiritual content” in the ordinary sense, nor at conspiracy culture in isolation, but at a braided thing: a hybrid formation in which personalization, mysticism, emotional intimacy, and hidden-cause thinking begin to reinforce one another. In the research, this formation tends to express itself in five overlapping registers: it is relational, as if the algorithm has a special bond with the user; injunctive, as if it is not merely describing reality but directing conduct; personal, as if the message has been cut to the contour of one individual life; spiritual, as if some unseen order is arranging the encounter; and conspiratorial, as if hidden forces, enemies, energies, or veiled truths are moving beneath the surface of ordinary events. The feed becomes less like a shelf and more like a séance table: not a place where content sits waiting, but a place where significance seems to arrive.

But there is an adjacent phrase that may be even more illuminating for this article: algorithmic divination. If algorithmic conspirituality names the belief that the platform knows and summons, algorithmic divination names the practice of treating the platform itself as an oracle or collaborator in spiritual work. In the literature on TikTok spirituality, particularly WitchTok, creators and viewers do not always treat the app as a neutral pipe through which meaning passes. They often treat it as a participant in the event: a chooser, a matcher, a hidden hand that delivers the right card spread, the right channeled message, the right warning, the right blessing, the right enemy-reading, the right prophecy, to the right person, at the right hour. The recommendation is no longer merely distribution. It becomes part of the ritual. The road itself is absorbed into the pilgrimage.

This distinction matters, because it helps us see how ordinary computation acquires sacred texture without ever needing to become supernatural in fact. A recommender system is, on paper, an optimization process: a machine that predicts what is likely to keep attention, prolong engagement, or satisfy inferred interest. But once that machinery enters a symbolic environment already rich with omen-language, channeling rhetoric, and destiny-framing, it can begin to function socially as something much older. The opaque mechanism becomes fertile ground for enchantment. Where the mind cannot inspect the gears, it often supplies a spirit. What is technically unresolved becomes emotionally numinous. The platform’s hidden logic becomes, in effect, an altar behind a curtain.

The creators are not incidental to this transformation; they are often its liturgists. Recent research on creators of this kind of content found that many explicitly work within the audience belief that “whoever needs to see it will see it,” and that creator intention, platform mediation, and viewer susceptibility form a feedback circuit rather than a one-way broadcast. The creator supplies the tone of destiny, the platform supplies the timing, and the user supplies the hunger for pattern. Together they complete the illusion of selection. The algorithm does not need to announce itself as magical; it only needs to be narrated as such, again and again, until personalization begins to feel like providence.

To name this phenomenon, then, is not to mock belief, nor to flatten sincere spiritual practice into pathology. It is simply to say that a new cultural object has emerged: a technologically mediated form of omen-making in which recommendation systems, creator rhetoric, and human pattern-seeking combine to produce a compelling false intimacy. And once that false intimacy is established, the sentence “If this video found you, it was meant for you” no longer sounds like a flourish of marketing or mysticism. It sounds like a diagnosis of the medium itself.

Why it feels real even when it isn’t supernatural

If the previous section gave the apparition a name, this one asks a quieter, more unsettling question: why does the apparition feel so convincing in the first place?

Part of the answer is mercifully ordinary. The human mind is not a courtroom by default; it is a pattern engine built for survival, intimacy, and orientation. It is forever trying to decide what matters, what belongs together, who intends what, and whether the next signal is noise or warning. That machinery is often beautiful. It lets us recognize faces in crowds, infer mood from a pause, hear irony in a sentence, and find structure quickly enough to stay alive. But the same apparatus that makes us perceptive can also make us permeable. It can take a vague sentence, a well-timed video, a repeated theme, and an unexplained recommendation and weave them into the felt experience of personal meaning. The result is not stupidity. It is an old intelligence meeting a new machine without having evolved defenses for the encounter.

One name for the first mechanism is the Barnum effect. This is the well-established tendency to experience broad, generic statements as uniquely and intimately true of oneself. A sentence can be open enough to fit thousands of lives and still land with the force of recognition if it is phrased in the warm fog of personal address: you are wounded, but resilient; people misunderstand your depth; abundance is near if you trust your path. The mind does not always evaluate such language by asking, “How many people could this apply to?” It often asks, much faster and more bodily, “Can I feel myself inside it?” That is why vague personality readings, horoscopes, and fortune-telling scripts can feel bespoke even when they are built from interchangeable parts. In a feed environment, where timing and repetition lend extra force, the effect is no longer just rhetorical. It becomes atmospheric.
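The arithmetic behind the Barnum effect is worth seeing on paper. The sketch below assumes, purely for illustration, that each vague clause in a “reading” independently fits about 60% of people; neither figure comes from the research cited above. Under that assumption, a reading built from a few interchangeable clauses fits nearly everyone while still feeling bespoke.

```python
# Back-of-envelope Barnum arithmetic. The 60%-per-clause figure and the
# independence assumption are illustrative only, not empirical data.

def fraction_fit(per_clause=0.6, n_clauses=3):
    """Fraction of people matched by at least one of n independent clauses."""
    miss_all = (1 - per_clause) ** n_clauses
    return 1 - miss_all

# Three vague clauses ("you are wounded but resilient", etc.):
print(f"{fraction_fit():.1%} of people fit at least one clause")
```

With three such clauses the reading already covers roughly 94% of listeners; add two more and the coverage is nearly total, which is why a script built from interchangeable parts can land as recognition.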

But the Barnum effect alone is too thin an explanation for what happens when a user begins to feel that the platform itself is somehow arranging destiny. For that, we need a second tendency: teleological thinking. This is the mind’s habit of explaining events in terms of purpose, design, or intention, of asking not merely how something happened, but what it was for. Teleological thought is not always pathological; in moderate form it helps people search for meaning, organize causality, and orient themselves in a complicated world. Yet the literature is clear that in excess it can become maladaptive, feeding conspiracy thinking, delusion-like ideas, and the over-ascription of purpose to events that may be better explained by accident, probability, or system design. In other words, once a person is already disposed to ask “Why did this find me now?”, the step from recommendation to revelation can become alarmingly short.

This is why the phrase “If you’re seeing this, it was meant for you” is so potent. It smuggles teleology into infrastructure. It takes a system built to optimize retention and recasts it as a system built to deliver fate. And for a user already primed by loneliness, grief, fear, or hope, that is not a small semantic shift. It is the difference between the algorithm inferred my interests and the universe addressed me by name. The former is technical. The latter is liturgical.

There is a third ingredient, and it may be the most important: pattern hunger under uncertainty. When people feel low in control, high in stress, or exposed to a chaotic environment, they become more likely to perceive coherence in randomness and agency in ambiguous stimuli. This is not speculation; experimental work has shown that lacking control increases illusory pattern perception, meaning the tendency to identify meaningful relationships among unrelated or random things. Related studies have found that such illusory pattern perception is associated with stronger belief in both conspiracy theories and supernatural claims. Under uncertainty, then, the mind often tries to buy stability with interpretation. It would rather be wrong in an ordered universe than right in a frighteningly indifferent one.

This helps explain why some recommendation loops feel less like entertainment and more like a private codebook. A person under strain encounters one video about hidden enemies, then one about betrayal, then one about spiritual warfare, then one about imminent wealth, then one insisting that the sequence itself is meaningful. None of these messages may be specific in isolation. But under uncertainty, the mind begins to stitch them into a tapestry. A run of recommendations becomes a plot. A cluster becomes a sign. A coincidence becomes a courier. The old appetite for pattern, once useful for reading weather and faces and danger, is now reading machine-ranked content as if it were prophecy.
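How easily does a purely random feed produce a “meaningful” streak? The simulation below is a back-of-envelope sketch under stated assumptions: a feed of 50 videos drawn uniformly from five themes, with no personalization at all. Even then, runs of three same-theme videos in a row arise by chance in a large majority of feeds, which is exactly the raw material illusory pattern perception works with.

```python
import random

# Back-of-envelope simulation: how often does a RANDOM feed contain a
# streak of 3+ same-theme videos in a row? Theme count, feed length, and
# trial count are illustrative assumptions, not platform data.

def has_streak(feed, length=3):
    """True if the feed contains `length` identical themes in a row."""
    run = 1
    for prev, cur in zip(feed, feed[1:]):
        run = run + 1 if cur == prev else 1
        if run >= length:
            return True
    return False

def streak_rate(n_themes=5, feed_len=50, trials=10_000, seed=42):
    """Fraction of simulated random feeds that contain a 3-run."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    hits = sum(
        has_streak([rng.randrange(n_themes) for _ in range(feed_len)])
        for _ in range(trials)
    )
    return hits / trials

print(f"chance of a 3-run in a 50-video random feed: {streak_rate():.0%}")
```

The streaks are pure noise, yet each one is available to be read as a plot; personalization only makes the streaks more frequent and more thematically charged.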

And uncertainty does more than intensify pattern perception; it can intensify agency detection, the feeling that some intentional force is behind what one is seeing. Experimental research has found that experiences of uncertainty can increase both supernatural belief and the tendency to perceive intention in patterns, with uncertainty helping mediate that movement from ambiguity to agency. That matters because much spiritualized platform content is not merely suggestive; it is agentic. It does not say only, this may resonate. It says, this came to you for a reason. It attributes delivery itself to purpose. Once that frame is accepted, even banal recommendation logic can be experienced as evidence of hidden design.

The darker edge of this psychology appears when teleology and threat begin to braid together. Recent work has shown that paranoid thinking and teleological thinking can contribute to distinct forms of socially loaded misperception, while broader paranoia research continues to emphasize how intensely suffering can shape the interpretation of others’ intentions. In plain language: if someone is already frightened, destabilized, or primed to suspect hidden harm, a stream of personalized, emotionally charged videos can become raw material for a self-sealing worldview. The feed does not need to create the vulnerability from nothing; it only needs to meet it, mirror it, and hand it language.

None of this means that meaning-making is a flaw, or that all spirituality is delusion by another name. It means something subtler and more tragic: the same faculties that allow human beings to make art, religion, intimacy, and moral life can be recruited by systems that were never designed with psychological reality in mind. Recommendation engines do not have to believe in omens to mass-produce their emotional texture. They only have to keep feeding the oldest hungers in the human mind: the hunger for pattern, the hunger for purpose, and the hunger to feel that what arrives has arrived for me.

That is why the experience can feel supernatural even when it is not. It is not because a ghost is inside the machine. It is because the machine has learned, with statistical patience, how to knock on doors evolution left unlocked.

The algorithm doesn’t have to be mystical to feel mystical

The trick, if we can call it that, is that nothing supernatural has to happen for an experience to feel supernatural. A recommendation system is not a prophet. It is a sorting machine with an appetite. Its work is mundane in principle: it studies the traces people leave behind and tries to predict what they are most likely to watch next, what they are most likely to enjoy, and what will keep them returning. YouTube’s own help materials say its recommendation system compares your viewing habits with similar users and relies on signals such as watch history, search history, subscriptions, likes, dislikes, “not interested” feedback, and satisfaction surveys. In the language of engineering, this is personalization. In the language of a lonely or frightened human nervous system, it can feel like recognition.

This is why the machine can wear the mask of fate without ever becoming fate. It does not need a soul; it only needs memory. If you pause on tarot clips, replay relationship readings, linger on videos about betrayal, click on signs-and-synchronicity content, search for spiritual warfare, or watch a few creators all speaking in the same register of intimate urgency, the system begins to braid those behaviors into a profile. It does not know why you watched. It does not know whether you were curious, joking, grieving, dissociating, researching, or spiraling. But it does know that this family of content has become sticky around your attention, and it narrows the world accordingly. The effect is a bit like walking through a forest where every tree has quietly learned your name and begun leaning inward.
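The “forest leaning inward” can be sketched in a few lines. This is a toy model, not any real platform’s update rule: assume each watch multiplies a topic’s weight by a fixed boost, and the feed then allocates attention proportionally. The multiplicative compounding is the point; a handful of late-night lingers is enough to tilt the whole distribution.

```python
# Toy sketch of preference narrowing -- illustrative assumptions only.
# Each watch multiplies the topic's raw weight; shares are renormalized.

def watch(profile, topic, boost=1.5):
    """Strengthen one topic's weight, return the feed's topic shares."""
    profile[topic] *= boost
    total = sum(profile.values())
    return {t: w / total for t, w in profile.items()}

# Start with a perfectly balanced profile across four topics.
profile = {"tarot": 1.0, "cooking": 1.0, "music": 1.0, "news": 1.0}

shares = None
for _ in range(6):  # six lingering views of one topic
    shares = watch(profile, "tarot")

print({t: round(s, 2) for t, s in shares.items()})
```

After six watches the boosted topic commands roughly four-fifths of the modeled feed, without the system ever knowing why the user watched.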

What emerges on the screen, then, is not random at all. It is statistically tailored. And statistical tailoring has a peculiar emotional texture: it can feel more intimate than deliberate speech because it arrives without an obvious speaker. No friend texted it to you. No priest selected it. No therapist prescribed it. It simply appeared, as if out of the dark, carrying language that seems to fit the seam of your current life. That is where the enchantment enters. The algorithm has no incense, no robe, no candlelight. Its sacrament is timing. It catches a user at the precise intersection of recent behavior, accumulated preference, and current vulnerability, then serves up a message already lacquered in emotional relevance. To a meaning-making mind, that can feel less like targeting than destiny.

And because the system is optimized not just for relevance but for return, repetition matters immensely. The same emotional weather can be delivered again and again with slight variation: a new face, a new deck, a new title, the same undertow. YouTube itself has acknowledged a version of this mechanism in its teen-safety policies, noting that some categories of videos may be innocuous in a single viewing but become problematic when watched repeatedly. That admission is more important than it first appears. It means the platform recognizes that recommendation is not merely a mirror of interest; it is also a sequencing force, a curator of recurrence. A single whisper may be harmless. A hundred whispers, arranged by a machine that keeps learning which whisper you will not resist, can begin to sound like revelation.

The emotional charge of the content sharpens the illusion. A vaguely personalized cooking video does not usually feel enchanted. But a video promising that hidden enemies are being exposed, that sudden wealth is inbound, that your intuition was right all along, that your stalkers are failing, that your pain has cosmic meaning, that a healer has been sent to your feed, or that the appearance of the message itself is proof of its truth, this is different material. It already arrives with symbolic voltage. When such messages are repetitive, high-arousal, and broad enough to fit many lives while sounding specific enough to fit yours, the system does not need to prove anything. It only needs to keep placing emotionally resonant cards on the table until the user mistakes sequence for significance.

The creators, meanwhile, often complete the circuit. Research on algorithmic conspirituality content shows that some creators explicitly work within this enchanted frame, treating the platform’s distribution not as a neutral mechanical process but as part of the meaning of the message itself. The recommendation becomes folded into the reading: whoever needs to see it will see it. In this move, the creator does something subtle and powerful. They do not merely offer interpretation of a card spread, a channeled intuition, or a divinatory theme; they interpret the delivery mechanism as further evidence. The algorithm is no longer the road by which the message arrived. It becomes a witness for the message’s truth. That is how infrastructure becomes liturgy.

So the platform does not have to be mystical to feel mystical. It only has to do three things very well: remember what draws us in, repeat it with enough variation to avoid boredom, and hand it back to us in moments when the mind is most tempted to read recurrence as intention. The creators add the final consecration by saying, in effect, this did not merely reach you; it chose you. And once that sentence has been spoken often enough, the user may stop seeing a recommendation engine altogether. What they see instead is an oracle hidden inside the feed, a machine that never claims to be divine, but is increasingly experienced as if it were.

The creator layer: personalization as performance

If the algorithm is the stage machinery, the creator is the magician who steps into the light and tells the audience what the machinery means. That distinction matters. A recommendation system may deliver the video, but it is often the creator who consecrates the delivery. The clip does not simply begin with a tarot spread, a warning, or a soothing prophecy. It begins with an interpretation of arrival itself: whoever needs to see this will see it; the algorithm brought you here for a reason; your guides wanted this in your feed. In that moment, the creator performs a subtle but powerful conversion. Distribution becomes destiny. Recommendation becomes selection. The platform’s hidden ranking logic is transfigured into a spiritual event. What was, at the technical level, a probabilistic act of targeting is reintroduced to the viewer as a form of recognition. The road is not merely traveled; it is blessed.

This is why the creator cannot be treated as a passive passenger inside algorithmic culture. Recent research on TikTok creators of algorithmic conspirituality found that creators’ own interpretations of the For You Page do not merely coexist with platform behavior; they actively interact with it, shaping the kinds of spiritual and relational themes those creators produce. In other words, the belief structure is not an accidental frosting spread over neutral content after the fact. It becomes part of the content’s architecture. The creator learns how to speak with the platform’s aura of uncanny relevance, and sometimes through it, until the recommendation itself becomes a rhetorical instrument. The feed does not just deliver the oracle; the feed is folded into the oracle’s authority.

There is labor in this, and not only the ordinary labor of filming, editing, titling, and posting. There is a more intimate labor as well: the labor of holding other people’s projections, fears, hopes, grief, and hunger for signs. The same creator study found that this genre generates significant emotional and affective labor for creators because of the complex relational dynamics the content invites. Once a creator tells viewers that the message has “found” them, the creator is no longer just an entertainer or commentator. They are cast, however reluctantly, into the role of witness, interpreter, confessor, medium, and sometimes shepherd. The audience does not merely consume; it attaches. The channel becomes less like a publication and more like a lantern in a storm, and the person holding it becomes newly burdened by what gathers beneath the light.

This is one of the reasons the content can become socially sticky so quickly. On platforms like TikTok, spiritual creators are not only distributing beliefs; they are building atmospheres. Research on WitchTok shows that creators use short-form video not simply to display practice but to create community, inviting viewers into shared rituals, shared vocabularies, shared objects, and shared interpretations of what the technology itself is doing. A related study on algorithmic divination goes further, describing practitioners who treat the app not as a neutral conduit but as a powerful partner in readings, channeling, and connection, as if the software had become a ritual instrument in its own right. The creator, in that ecology, is part guide and part stage director, teaching viewers how to interpret not only the cards, crystals, symbols, or warnings, but the platform’s very act of surfacing them.

And because no digital liturgy stays untouched by economics for long, another current enters the water: incentive. WitchTok research has documented how this space is entangled with promotion, product recommendation, personal branding, and consumer capitalism. That does not mean every creator is cynical or fraudulent; many are plainly sincere. But sincerity does not cancel structure. A creator can authentically believe in their practice and still be shaped by the platform’s incentives toward charisma, recurrence, emotional intensity, and community capture. The same system that rewards intimacy also rewards mystique. The same system that rewards trust can reward dependency. The same system that says find your people can quietly encourage the construction of a small cosmology with one especially luminous sun at its center.

So the creator layer is not a decorative flourish added atop an already enchanted feed. It is one of the engines of enchantment. The platform offers the conditions of opacity, repetition, and relevance; the creator supplies narrative, tone, permission, and mythic framing. Together they produce something more potent than personalization alone: a performance of chosenness. And once a viewer begins to feel chosen, the distance between content and calling can collapse with remarkable speed. The creator does not have to say, I am your authority. It is often enough to say, softly, you were meant to hear this.

Where the harm begins

The harm does not usually begin with belief itself. It begins earlier, in a softer place: when suggestion starts to displace judgment. A person encounters a video that promises incoming wealth, divine vindication, enemies already defeated, healing already underway. None of these claims need be precise to do their work. In fact, their vagueness is part of their efficiency. They offer the emotional reward of certainty without the burden of evidence. For many viewers, this may remain a passing comfort, no more consequential than a horoscope. But research on algorithmic conspirituality suggests the phenomenon becomes riskier when platforms normalize personally meaningful messaging in domains already tied to vulnerability and behavior change, including mental health, ADHD, smoking, and body dysmorphia. The issue is not simply that people see spiritualized content. It is that repeated, personalized, “this is for you” framing can make tenuous claims feel actionable.

That is where false hope becomes more than an aesthetic problem. Hope is ordinarily tethered to reality by friction: by uncertainty, by time, by consultation, by evidence, by the difficult dignity of not knowing yet. But algorithmic readings often remove that friction. They tell the viewer not merely that good things are possible, but that they are already in motion, already decreed, already targeted toward you. In a healthier culture of care, such messages would remain poetic. In a recommendation environment, they can become behavioral cues. And the broader platform ecology gives reason for concern. In one large content analysis of TikTok mental-health videos, 33% of advice and information videos were judged misleading, and the misleading content tended to perform better. In a later analysis of German-language TikTok mental-health content, more than half of videos were incorrect or overgeneralized. This does not mean every viewer is deceived. It means the informational water table is already contaminated, and emotionally compelling misinformation often travels farther than careful truth.

False threat is more dangerous still. When creators speak of phone tapping, gang stalking, hidden enemies, bank fraud, curses, surveillance, or attempted murder, they are not merely offering a dramatic style. They are furnishing a vocabulary of interpretation. Most viewers will hear such language metaphorically, theatrically, or dismiss it altogether. But not all viewers arrive in the feed with the same degree of interpretive stability. Recent psychiatric research shows that technology and social platforms are now appearing directly inside delusional systems: in one modern cohort, more than half of subjects with psychotic disorders described technology-related delusions, including beliefs that YouTube or social media were sending them special messages or responding to their thoughts. That research does not prove that spiritual videos create psychosis. It does show that contemporary digital systems have become raw material for persecutory and referential belief, which means creators and platforms should be especially cautious about repeatedly framing algorithmic delivery as personal revelation or warning.

There is also a quieter harm, one less cinematic but perhaps more common: dependency. The creator begins as entertainer, reader, guide, or channel. But the structure of the content encourages a more intimate role. If the message found you for a reason, then the person delivering it starts to feel less like a broadcaster and more like a custodian of hidden truth. Research on creators of algorithmic conspirituality content suggests that this ecosystem is shaped not only by platform affordances but by creator labor and community formation: creators’ beliefs interact with platform mediation to reinforce spiritual and relational themes, and the resulting audience relationships produce significant emotional and affective labor because of their complexity. In plain terms, the bond can become sticky. The creator is not simply watched; they are leaned on. That does not make every creator manipulative. It does mean the medium itself is structurally hospitable to quasi-pastoral dependence, especially when community forms around the idea that one person, or one recurring feed persona, has special access to the viewer’s unseen reality.

The most serious cost may be the erosion of help-seeking. Once algorithmic readings begin to function as diagnosis, warning system, or treatment plan, grounded reality checks lose their authority. The issue here is not spirituality versus science in some crude opposition. It is substitution. A person who might once have asked, Should I call my bank? Should I speak to a friend? Should I see a clinician? Should I document this? Should I sleep before deciding what this means? now receives a more intoxicating script: You already know. The signs have confirmed it. This found you for a reason. There is empirical reason to worry about that substitution effect. In a study of ADHD misinformation on TikTok, misleading content not only reduced knowledge; it also increased confidence in that mistaken knowledge and increased intentions to seek both evidence-based and non-evidence-based treatment. That is an especially chilling combination: lower accuracy, higher confidence, and action without calibration.

This is why the harm in question should not be framed as “people online believe odd things.” That is too shallow, and too smug. The deeper problem is infrastructural. A recommendation system optimized for relevance, creators rewarded for intimacy and recurrence, and viewers under conditions of uncertainty together form a machine for manufacturing false coherence: a persuasive sense that emotionally charged, weakly evidenced content is not merely resonant, but reliable. Public-health scholars have already warned that health misinformation on social media is extremely prevalent and especially harmful for adolescents and marginalized groups, precisely because these platforms are now major routes through which health beliefs are formed. In that light, the occult rhetoric of destiny is not just a curiosity at the edge of the internet. It is a delivery system, sometimes for comfort, sometimes for confusion, and sometimes for beliefs that lead people away from care and deeper into the labyrinth of signs.

Vulnerable users and false coherence

By false coherence, I mean something simple, though its effects are not simple at all: a system that feels meaningful, ordered, and explanatory, but is actually built from personalization, repetition, and selective interpretation rather than verified reality. It is a little like mistaking the echo in a canyon for a second voice. The sound is real. The timing is real. The emotional impact is real. But the intelligence one imagines behind it may not be there. In algorithmic environments, that mistaken surplus of meaning can become especially persuasive when creators explicitly frame discovery itself as destiny, and when the platform repeatedly confirms the user’s private suspicions with new variations of the same theme. Research on algorithmic conspirituality is useful here because it names exactly this fusion of personalization, spiritual significance, and conspiratorial or self-referential meaning.

Adolescents are one group for whom this false coherence may carry special force, not because they are naïve, but because adolescence is already a period in which identity, belonging, and reality-testing are under rapid construction. The U.S. Surgeon General’s 2023 advisory says plainly that we cannot conclude social media is sufficiently safe for children and adolescents, and warns that repeated exposure to certain forms of content can be harmful, especially in a developmental environment already strained by a broader youth mental-health crisis. When a young person is repeatedly told that a message was “meant” for them, the risk is not merely that they believe something odd. The deeper risk is that platform-mediated suggestion begins to rival ordinary sources of reality-checking—friends, family, teachers, and time itself.

Lonely users may also be especially susceptible to this structure of seduction. Loneliness lowers the threshold at which recognition feels salvific. A person who feels unseen in ordinary life may experience a vague but intimate video as a kind of rescue: not because the content is accurate, but because it arrives wearing the shape of attention. Here the creator’s rhetoric matters enormously. Whoever needed to hear this did. Your guides brought this here. This found you. These phrases do not simply interpret the content; they simulate relationship. And we should be careful not to overstate causation. Still, recent research found that algorithmic conspirituality can normalize behavior-change messaging in vulnerable domains like mental health and body image, while a 2025 meta-analysis among students found that problematic social-media use was positively associated with loneliness, anxiety, depression, and fear of missing out. The feed does not create all such vulnerability from nothing. But it can learn how to speak in its accent.

People under acute stress are vulnerable in a different way. Stress narrows time, compresses nuance, and makes random events feel charged. When life is chaotic, the mind becomes hungrier for shape. That is not a moral failure; it is one of the oldest adaptive reflexes we have. Experimental work has shown that when people feel a loss of control, they become more likely to perceive illusory patterns, to see coherent relationships where stimuli are actually random or unrelated. Related research links illusory pattern perception to both conspiracy beliefs and supernatural beliefs. In such a state, a recommendation system does not have to provide proof. It only has to provide recurrence. The same idea appearing three times in one evening can begin to feel less like probability than instruction.
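Recurrence, in other words, is cheap for a recommender to produce. A toy simulation makes the point: even a modest preference weight (all topics, weights, and session sizes here are hypothetical, not any platform's actual parameters) makes same-theme repeats within a single evening the norm rather than the exception, with no intention behind any of it.

```python
import random

random.seed(7)

TOPICS = ["cooking", "music", "comedy", "news", "spiritual"]

def draw_session(boost: float, n: int = 12) -> list[str]:
    """Sample one evening's feed of n videos. `boost` multiplies the
    weight of the 'spiritual' topic -- a hypothetical stand-in for
    personalization after a single prior engagement."""
    weights = [1.0] * len(TOPICS)
    weights[TOPICS.index("spiritual")] *= boost
    return random.choices(TOPICS, weights=weights, k=n)

def fraction_with_three_repeats(boost: float, trials: int = 10_000) -> float:
    """Fraction of simulated evenings in which the spiritual theme
    appears at least three times."""
    hits = sum(
        draw_session(boost).count("spiritual") >= 3 for _ in range(trials)
    )
    return hits / trials

# Even with no personalization at all, three repeats in one evening is
# already common; a modest 3x boost makes it nearly certain.
baseline = fraction_with_three_repeats(boost=1.0)
boosted = fraction_with_three_repeats(boost=3.0)
print(f"no boost: {baseline:.2f}")
print(f"3x boost: {boosted:.2f}")
```

The simulation claims nothing about any real feed; it only shows that "the same idea three times in one evening" needs no instruction behind it, merely a mildly tilted sampler.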

The most delicate category to discuss is users with paranoia-prone or psychosis-prone cognition, and care is essential here. The claim is not that spiritual content or recommender systems create psychosis in any simple or universal sense. The claim is narrower and more defensible: that digital systems increasingly furnish the imagery, grammar, and explanatory material out of which delusional frameworks may be built. A 2025 study in The British Journal of Psychiatry found that among adults with psychotic disorders in treatment, over half described technology-related delusions, including fears involving hacked devices, hidden surveillance, social media, and beliefs that online systems were responding specifically to them; technology-delusion prevalence also increased over time in the cohort. In other words, the platform is not merely a backdrop. For some vulnerable users, it becomes part of the delusional architecture itself. That is precisely why phrases like “the algorithm brought you here for a reason” deserve more scrutiny than they currently receive.

What all these groups share is not weakness, but reduced margin for ambiguity. Adolescents are still building interpretive scaffolding. Lonely users are often starved for recognition. Stressed users are more likely to trade uncertainty for pattern. Paranoia-prone or psychosis-prone users may already be navigating a reality in which self-reference, hidden intention, and technological mediation are unusually charged. For each of them, false coherence offers the same counterfeit gift: relief in the form of explanation. It says, This is not random. This is about you. The world has become legible at last. And because that promise is affectively potent before it is evidentially sound, it can feel like care even when it is only capture.

Actual harm in evidence

It is important, here, not to become melodramatic. The evidence does not support a crude fairy tale in which every scroll leads straight to madness, nor a smug counter-fable in which recommendation systems are harmless mirrors and all concern is moral panic. What the literature shows instead is more unsettling and more useful: social-media effects are patterned, not uniform. A recent systematic review and meta-analysis found that ordinary social media use had a small but significant association with depression and anxiety in young people, while problematic social media use was associated with depression, anxiety, poorer wellbeing, and sleep problems. In other words, the platform is not a single poison; the risk grows with the style of use, the compulsiveness of the loop, and the vulnerabilities the loop meets.

That nuance matters because the more specific studies are even more revealing. In one experience-sampling study of people with and without psychosis, social-media use predicted low mood, though it did not, on its own, predict paranoia across the sample. But the details are what matter: posting about feelings and venting predicted low mood, low self-esteem, and high paranoia, while other activities, such as viewing newsfeeds, were associated in that study with reductions in negative affect and paranoia. A more recent ecological momentary assessment in people with schizophrenia-spectrum or bipolar disorders likewise found that social-media use was associated with a modest decrease in subsequent paranoia, not an increase. Taken together, these findings point to a sober conclusion: the harm is not reducible to exposure alone. It depends on content, context, use pattern, and user state. That is exactly why algorithmic framing and creator rhetoric matter so much; they shape not just what is seen, but how it is metabolized.

Still, there are domains where the risk is sharper and less ambiguous. The scholarship on algorithmic conspirituality explicitly links this form of platform-mediated “this found you for a reason” meaning-making to the normalization of persuasive messaging in areas such as mental health, ADHD, smoking, and body dysmorphia, with potential for both positive and negative health outcomes. That is a critical point. The danger is not only false spiritual intensity; it is the quiet migration of authority. Once a feed begins to feel like a diagnostic instrument, a prophecy machine, or a healing channel, it can become easier for users to treat emotionally resonant content as guidance rather than suggestion. And the broader literature on mental-health misinformation online has been warning for some time that misleading content is common, compelling, and often better rewarded by platform dynamics than careful truth.

The psychiatric evidence is especially sobering. A 2025 cohort analysis of technology delusions found that around a quarter of those with such delusions incorporated social media, with Instagram, YouTube, Facebook, X, Reddit, Snapchat, and Spotify all appearing in the material. The reported beliefs are not vague: one subject believed he was receiving “special messages” in Spotify playlists; another believed Facebook profile photos and posts were encoded messages to her; one reported receiving messages from God through YouTube videos and said the messages would “pop up with my exact thought”; another thought Instagram was “responding to her thoughts in real time.” The same paper reports cases involving fears of hacked phones, tracking devices, hidden cameras, implanted chips, and electronic monitoring. This does not mean platforms create psychosis out of whole cloth. It does mean that platforms and their signals are now being woven directly into delusional frameworks, especially in vulnerable users.

Youth-health authorities have, in effect, begun to say that waiting for perfect certainty is no longer a responsible posture. The U.S. Surgeon General’s Advisory states plainly that we cannot conclude social media is sufficiently safe for children and adolescents. It also notes that children and adolescents who spend more than three hours a day on social media face double the risk of mental-health problems including depression and anxiety, and that 46% of adolescents aged 13 to 17 say social media makes them feel worse about their body image. Even more telling, YouTube’s own youth-safety framework now rests on the principle that some categories of content may be innocuous in a single viewing but problematic in repetition, especially for teens. That is very close to the central argument of this essay: harm may arise not only from the message itself, but from its repeated, personalized delivery until it begins to masquerade as truth.

So the harm in evidence is not best described as a sudden fall into delusion. It is more often a sequence of smaller permissions. A person is taught to over-trust resonance. Then to treat recurrence as significance. Then to accept delivery as destiny. Then to substitute emotionally vivid content for slower forms of verification, consultation, and care. By the time something more dangerous appears (persecution narratives, coded-message beliefs, grandiose healing claims, enemies everywhere), the structure of belief has already been built. The house is standing before the viewer realizes what kind of ground it rests on.


Why this is also a system-stability problem

At a certain point, the question stops being merely therapeutic and becomes civic. A feed is not just a mirror held up to one isolated psyche; it is part of the atmosphere in which a society decides what is real together. Human beings do not live by private cognition alone. We require some minimum shared reality, some workable overlap in what we take to be happening, what counts as evidence, and which signals deserve trust. Social psychologists describe this shared reality as a basic human need: people are motivated not only to feel connected to others but to feel that their understanding of the world has at least some common ground with them. When personalization intensifies to the point that each person is sorted into increasingly private interpretive weather, that common ground begins to thin. Scholars of online targeting call this epistemic fragmentation: a condition in which people no longer have access to enough shared context to see what others are being shown, told, or nudged to believe. At that point, public reasoning becomes structurally harder, because citizens are no longer arguing from a common floor but from adjacent, customized worlds.

That is why the danger here cannot be contained within the language of “screen time” or “content moderation” alone. If enough people are nudged into personalized ecologies of omen, persecution, destiny, or grand reward, the damage is not only that some individuals may suffer. The deeper damage is that algorithmic selection can begin to feel more trustworthy than shared evidence. A person no longer asks, What do we know? They ask, Why did this arrive for me? The shift is subtle, but civilizationally important. It replaces public verification with private significance. It turns common reality into a patchwork of algorithmically curated revelations. And as scholars of epistemic fragmentation warn, even sophisticated individuals remain vulnerable when they are isolated inside such targeting regimes; the problem is not merely gullibility but the structural weakening of the public forum itself.

YouTube, to be fair, does not present a single simple story. The evidence is mixed, and we should say so plainly. A causal study using “counterfactual bots” found that, on average, relying on YouTube’s recommender produced less partisan consumption, especially among already heavy partisan consumers, a finding that cuts against the cruder version of the “algorithm radicalizes everyone” thesis. But a more recent large-scale PNAS audit found something equally important: YouTube recommendations tended to be ideologically congenial, and for right-leaning users that congeniality increased deeper in the recommendation trail. The same audit did not find large substantive increases in ideological extremity, yet it did find that a growing share of recommendations deeper in the trail came from extremist, conspiratorial, and otherwise problematic channels, with the pattern most pronounced among right-leaning users. A systematic review of the broader literature reached a similarly cautious conclusion: the evidence is not uniform, but a substantial portion of studies do implicate the recommender in facilitating pathways to problematic content, while others find mixed or null results. The responsible conclusion is therefore neither complacency nor panic. It is that recommender effects are contingent, but real enough to matter, especially once repetition, vulnerability, and creator framing are added to the equation.

That contingency is precisely what makes this a system-stability issue rather than a morality play. Recommendation systems do not have to drive every user toward the same abyss to destabilize a culture. They only have to sort enough people into increasingly self-sealing pockets of meaning that the status of evidence itself begins to fracture. One person is drawn into a devotional certainty that healing has already occurred; another into a certainty that enemies are closing in; another into a conspiracy that hidden powers are communicating through the feed; another into a politics that feels less like an argument than a stream of signs. None of these realities need be universal to be corrosive. They only need to become persuasive enough, and numerous enough, that common adjudication weakens. At that point the platform is no longer merely recommending videos. It is helping allocate ontologies. And when a society outsources ontology to engagement-optimized systems, it does not simply become misinformed. It becomes harder for that society to remember what a shared world feels like.

What this article is not saying

It is worth pausing here, before the argument hardens into caricature, to say plainly what this essay is not doing. It is not declaring war on tarot, divination, Wicca, witchcraft, or spiritual practice as such. That would be a failure of both scholarship and imagination. The digital spiritual world is not one thing. Research on WitchTok describes a genuine, diverse subculture in which practitioners share rituals, symbols, objects, and forms of community; work on algorithmic divination likewise examines how some users treat platforms as active tools or partners in spiritual practice. To say that these spaces exist, and matter to people, is not to pathologize them. The critique here is narrower: not spirituality in itself, but a specific platform-mediated structure in which personalization, opacity, and rhetorical framing can make content feel more evidential than it is.

It is also not claiming that recommendation algorithms cause psychosis de novo in the general population. That would go beyond the evidence. The psychiatric literature discussed earlier is more careful than that. The 2025 Cambridge cohort study examined people already experiencing psychotic disorders and documented how internet platforms, devices, Wi‑Fi, and social media increasingly appear inside delusional frameworks; it does not show that recommender systems independently generate psychosis in otherwise unaffected people. The warning is subtler and, in some ways, more disturbing: digital systems are becoming part of the symbolic furniture through which vulnerable minds organize persecution, reference, and revelation. That is different from saying the systems create those minds from nothing.

What this article is arguing is more restrained, and more defensible. It is arguing that opaque personalization plus creator rhetoric can intensify false meaning attribution and become unsafe for some users. The recommendation engine learns from attention signals; creators then sometimes narrate that recommendation as fate, guidance, or proof of relevance. Researchers studying algorithmic conspirituality define the phenomenon precisely in these terms: users come to believe that algorithms know them intimately and deliver personally meaningful messages at just the right moment, and that structure can normalize persuasive messaging in sensitive domains such as mental health, ADHD, smoking, and body dysmorphia. Related creator research shows that the belief structure is not merely an audience hallucination imposed on neutral content; creators themselves sometimes work with that frame, reinforcing spiritual or relational themes shaped by the platform’s mediation.

So the claim is not that all spiritual media is harmful, nor that the feed is a demonic machine birthing madness. The claim is that, for some users, a recommendation architecture built to maximize relevance can become an engine of false coherence when it is paired with emotionally charged, vaguely personalized, spiritually or conspiratorially framed content. That risk is heightened for youth and other vulnerable users; the U.S. Surgeon General has explicitly said we cannot conclude social media is sufficiently safe for children and adolescents, and both public-health guidance and platform policy now acknowledge that repeated exposure itself can be part of the harm profile. The point, then, is neither ridicule nor panic. It is proportion: to name a genuine design risk without turning it into a metaphysical crusade of our own.

What platforms should do

The answer is not censorship by reflex, nor a puritanical war on ambiguity, symbol, or spiritual language. It is something more modest and more humane: platforms should stop pretending that recommendation is neutral when they already know repetition can become a risk factor. YouTube has said as much in its own youth-safety work, describing categories of videos that may be innocuous in a single viewing but problematic when repeatedly recommended, and it has already built recommendation safeguards for some of those patterns. That principle should be widened beyond the narrow domains it currently covers. If a platform can recognize that repeated exposure to idealized body imagery, social aggression, or dubious financial advice can distort belief and behavior, then it can also recognize the danger of repeated “meant for you” content when that content is bundled with fear, persecution, miracle claims, or destiny rhetoric.

So the first obligation is architectural, not moralistic: down-rank the repeated delivery of vague chosenness claims when they are paired with crisis themes. A single tarot reading or channeling video is one thing; an escalating stream of videos implying that the viewer is being watched, cursed, stalked, targeted, secretly selected, or cosmically guaranteed wealth is another. The design target should not be belief itself, but the dangerous fusion of three elements: personalization, repetition, and high-arousal claims. The platform need not decide whether a creator’s cosmology is true. It only needs to recognize when recommendation is helping convert ambiguity into false certainty. YouTube already has a policy logic for this: some content may be acceptable in isolation but unsafe in repetitive sequencing. That same design logic should be extended to algorithmically intensified omen-culture.
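That design target can be sketched in a few lines. The sketch below is purely illustrative: the signal names, flags, and penalty values are hypothetical inventions for this essay, not any platform's actual ranking system. The point is structural: the penalty fires only when chosenness framing, crisis themes, and repetition co-occur, leaving the single reading untouched.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """Hypothetical per-video annotations; a real system would derive
    these from classifiers, not hand-set flags."""
    video_id: str
    base_score: float          # relevance score from the ranker
    chosenness_claim: bool     # "this found you for a reason" framing
    crisis_theme: bool         # persecution, curses, surveillance, etc.

def adjusted_score(c: Candidate, recent_similar: int) -> float:
    """Down-rank only when all three risk elements co-occur: a
    chosenness claim, a crisis theme, and recent repetition."""
    if c.chosenness_claim and c.crisis_theme and recent_similar >= 2:
        # Illustrative penalty: halve the score per prior repeat.
        return c.base_score * (0.5 ** recent_similar)
    return c.base_score

# A one-off reading keeps its full score; an escalating stream is damped.
video = Candidate("v1", 1.0, chosenness_claim=True, crisis_theme=True)
print(adjusted_score(video, recent_similar=0))  # untouched
print(adjusted_score(video, recent_similar=3))  # heavily damped
```

Notice what the function never evaluates: whether the creator's cosmology is true. It only responds to the fusion the paragraph above describes.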

Second, platforms should add friction and context where rhetoric drifts into domains that can affect health, safety, money, or persecutory interpretation. Not a blunt warning slapped on everything uncanny, but carefully targeted context layers around content that makes quasi-diagnostic claims about mental state, promises supernatural healing, frames ordinary events as coordinated attacks, or foretells imminent financial rescue. YouTube has already moved in this direction elsewhere: it now surfaces age-appropriate mental-health and wellbeing content from trusted sources for teens, and it has deployed crisis-support resource panels in some regions. Those are precedents worth building on. A user who is being fed a cluster of videos about surveillance, gang stalking, curses, miracle recoveries, or instant wealth should encounter at least a moment of epistemic drag, a pause in which the platform says, in effect: before you make this part of your reality, here are grounded resources, here is context, here is another way to interpret what you’re seeing.

Third, users need better control over the feedback loops that slowly teach the platform which private weather to keep returning. YouTube’s own help pages already acknowledge that watch history, search history, subscriptions, likes, dislikes, and “Not interested” feedback shape recommendations, and the platform gives users tools to delete or pause watch history, remove past searches, mark videos as “Not interested,” or block a channel from future recommendations. Those controls are valuable, but they remain too hidden and too reactive. If someone begins noticing that the feed has turned uncanny, that it is spiraling into threat, grandiosity, or obsessive patterning, the platform should make escape routes obvious: reset this recommendation cluster, stop using this topic for future suggestions, break this sequence, show me fewer videos like this, explain why I am being shown this now. At present, the machine often feels like a maze with only decorative exits. A safer design would make agency legible before people have to go looking for it.
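What "legible agency" might mean can be shown with a toy model. Everything here is hypothetical (the weight dictionary, the decay factor, the "reset cluster" action are inventions for illustration, not YouTube's actual mechanics); the design point is that negative feedback should act sharply and visibly, and that a whole cluster should be removable in one gesture.

```python
def apply_not_interested(weights: dict[str, float], topic: str,
                         decay: float = 0.25) -> dict[str, float]:
    """Return a copy of `weights` with `topic` sharply damped, so a
    single 'Not interested' signal has a visible, immediate effect."""
    updated = dict(weights)
    updated[topic] = updated.get(topic, 0.0) * decay
    return updated

def reset_cluster(weights: dict[str, float],
                  topics: set[str]) -> dict[str, float]:
    """An explicit escape hatch: drop a whole recommendation cluster."""
    return {t: w for t, w in weights.items() if t not in topics}

profile = {"cooking": 1.0, "omens": 4.0, "music": 0.8}
profile = apply_not_interested(profile, "omens")
print(profile["omens"])    # 1.0 -- damped but still present

profile = reset_cluster(profile, {"omens"})
print("omens" in profile)  # False -- the cluster is gone
```

The contrast with a maze of decorative exits is the whole argument: both operations are one call, and the user can see exactly what each one did.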

Finally, when content clusters around paranoia, self-harm, stalking, fraud, or other high-risk themes, the platform should treat that clustering as a signal, not just a series of isolated views. The individual video is not always the unit that matters most. Sometimes the real risk is the sequence: one clip that names hidden enemies, another that confirms surveillance, another that promises retribution, another that claims the feed itself is divinely arranged. By the time the user reaches the fourth or fifth iteration, the platform has already become part of the story. That is exactly why YouTube’s teen-safety precedent matters so much: it admits that repetition itself can be harmful, and that design should respond accordingly. The same wisdom should govern the stranger corners of the feed. Not because every mystical claim is dangerous, but because a recommendation engine should never be allowed to quietly impersonate an oracle where fragile minds are listening.
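Treating the sequence, rather than the video, as the unit of analysis is straightforward to express. The sketch below (theme labels, window size, and threshold are all hypothetical choices for illustration) flags only when high-risk themes cluster inside a short run of consecutive views, so scattered single videos pass untouched.

```python
from collections import deque

# Hypothetical theme labels a classifier might assign.
HIGH_RISK = {"persecution", "surveillance", "curse",
             "retribution", "self_harm", "fraud"}

def cluster_alert(theme_sequence, window: int = 5,
                  threshold: int = 3) -> bool:
    """Flag when `threshold` or more high-risk themes land inside any
    `window`-length run of consecutive views. The unit of analysis is
    the sequence, not the individual video."""
    recent = deque(maxlen=window)
    for theme in theme_sequence:
        recent.append(theme in HIGH_RISK)
        if sum(recent) >= threshold:
            return True
    return False

# Scattered single views do not trip the signal...
print(cluster_alert(["cooking", "curse", "music", "news",
                     "surveillance", "comedy", "recipes"]))   # False
# ...but the escalating sequence described above does.
print(cluster_alert(["persecution", "music", "surveillance",
                     "retribution", "comedy"]))               # True
```

A real system would need far more care about classification error; the sketch only shows that "clustering as a signal" is a small, well-defined computation, not an exotic demand.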

Closing Thoughts

There was a time when omens belonged to the edges of the world. They lived in smoke, in weather, in the flight of birds, in the mutter of priests, in the careful turning of cards on a cloth-covered table. They came wrapped in distance. A person had to travel to them, or wait for them, or make a ritual of asking. That distance mattered. It gave the soul room to hesitate. It allowed mystery to remain visibly mysterious.

Now the omen arrives frictionless. It slips into the feed between comedy, recipes, politics, and music. It wears the ordinary clothes of recommendation, but speaks in the grammar of fate. It does not ask to be sought. It seeks. It finds the tired user, the grieving user, the lonely user, the frightened user, the user already half-convinced that the world is trying to say something through static. And then it whispers with perfect confidence: this was meant for you.

That is the new enchantment, not the return of magic, but the industrialization of its emotional texture. A platform optimized for relevance becomes, in the right rhetorical hands, a theater of revelation. A system built to predict attention begins to feel like a system built to disclose hidden truth. The machine never has to lie in the old-fashioned sense. It only has to repeat, refine, and arrive on time. Human pattern-hunger does the rest. The mind, so ancient in its need for meaning, finishes the spell the software began.

Thus, the danger is not merely that some users may believe odd things. The deeper danger is that a culture accustomed to this rhythm may slowly lose the habit of asking the older, harder questions: How do I know? What else could explain this? Who benefits if I interpret this as destiny? What evidence survives outside the mood of the moment? Once those questions weaken, skepticism is not defeated by argument. It is seduced into sleep.

The danger is not that the algorithm is secretly magical. The danger is that it can feel magical enough to bypass skepticism, especially when the feed arrives wrapped in intimacy, destiny, and fear.


Works Cited

Ahmed, Oli, et al. “Social Media Use, Mental Health and Sleep: A Systematic Review with Meta-Analyses.” Journal of Affective Disorders, vol. 367, 2024, pp. 701–712. Elsevier, https://doi.org/10.1016/j.jad.2024.08.193.

Berry, N., et al. “Social Media and Its Relationship with Mood, Self-Esteem and Paranoia in Psychosis.” Acta Psychiatrica Scandinavica, vol. 138, no. 6, 2018, pp. 558–570. Wiley, https://doi.org/10.1111/acps.12953.

Burns, Alaina V., et al. “‘The Algorithm Is Hacked’: Analysis of Technology Delusions in a Modern-Day Cohort.” The British Journal of Psychiatry, 2025, pp. 1–5. Cambridge UP, https://doi.org/10.1192/bjp.2025.10452.

Castiello, Santiago, et al. “Paranoid and Teleological Thinking Give Rise to Distinct Social Hallucinations in Vision.” Communications Psychology, vol. 2, article 117, 2024. Nature, https://doi.org/10.1038/s44271-024-00163-9.

De, Ankolika, et al. “‘Whoever Needs to See It, Will See It’: Motivations and Labor of Creating Algorithmic Conspirituality Content on TikTok.” Proceedings of the ACM on Human-Computer Interaction, vol. 9, no. 7, article CSCW266, 2025. ACM, https://doi.org/10.1145/3757447.

Goodrow, Cristos. “On YouTube’s Recommendation System.” YouTube Official Blog, 15 Sept. 2021, https://blog.youtube/inside-youtube/on-youtubes-recommendation-system/. Accessed 5 Apr. 2026.

Haroon, Muhammad, et al. “Auditing YouTube’s Recommendation System for Ideologically Congenial, Extreme, and Problematic Recommendations.” Proceedings of the National Academy of Sciences, vol. 120, no. 50, 2023, article e2213020120. https://doi.org/10.1073/pnas.2213020120.

Higgins, E. Tory, Maya Rossignac-Milon, and Gerald Echterhoff. “Shared Reality: From Sharing-Is-Believing to Merging Minds.” Current Directions in Psychological Science, vol. 30, no. 2, 2021, pp. 103–110. https://doi.org/10.1177/0963721421992027.

Kanthawala, Shaheen, et al. “Algorithmic Conspirituality: Explicating Its Emergence, Dimensions, and Persuasibility.” New Media & Society, vol. 27, no. 5, 2025, pp. 2885–2910. SAGE, https://doi.org/10.1177/14614448231217425.

Miller, Chris. “How Modern Witches Enchant TikTok: Intersections of Digital, Consumer, and Material Culture(s) on #WitchTok.” Religions, vol. 13, no. 2, 2022, article 118. MDPI, https://doi.org/10.3390/rel13020118.

Milano, Silvia, Brent Mittelstadt, Sandra Wachter, and Christopher Russell. “Epistemic Fragmentation Poses a Threat to the Governance of Online Targeting.” Nature Machine Intelligence, vol. 3, 2021, pp. 466–472. https://doi.org/10.1038/s42256-021-00358-3.

Mross, Aaron L., et al. “Insufficient Quality of Mental Health Information on German-Speaking TikTok: A Content Analysis.” Clinical Practice and Epidemiology in Mental Health, vol. 8, no. 1, 2026, article e17279. https://doi.org/10.32872/cpe.17279.

Office of the Surgeon General. Social Media and Youth Mental Health: The U.S. Surgeon General’s Advisory. U.S. Department of Health and Human Services, 2023, https://www.ncbi.nlm.nih.gov/books/NBK594757/. Accessed 5 Apr. 2026.

Ongchoco, Joan Danielle K., Santiago Castiello, and Philip R. Corlett. “Excessive Teleological Thinking Is Driven by Aberrant Associations and Not by Failure of Reasoning.” iScience, vol. 26, no. 9, 2023, article 107643. https://doi.org/10.1016/j.isci.2023.107643.

Scott, Matthew J. “Reasons Things Happen for a Reason: An Integrative Theory of Teleology.” Perspectives on Psychological Science, vol. 17, no. 2, 2022, pp. 452–464. SAGE, https://doi.org/10.1177/1745691621995753.

Schiros, Ashley, Nick Bowman, and Kevin Antshel. “Misinformation Mayhem: The Effects of TikTok Content on ADHD Knowledge, Stigma, and Treatment-Seeking Intentions.” European Child & Adolescent Psychiatry, vol. 34, no. 11, 2025, pp. 3521–3533. https://doi.org/10.1007/s00787-025-02769-8.

St. Lawrence, Emma. “The Algorithm Holy: TikTok, Technomancy, and the Rise of Algorithmic Divination.” Religions, vol. 15, no. 4, 2024, article 435. MDPI, https://doi.org/10.3390/rel15040435.

Turuba, Roxanne, et al. “Do You Have Depression? A Summative Content Analysis of Mental Health-Related Content on TikTok.” DIGITAL HEALTH, vol. 11, 2025. SAGE, https://doi.org/10.1177/20552076241297062.

Valdesolo, Piercarlo, and Jesse Graham. “Awe, Uncertainty, and Agency Detection.” Psychological Science, vol. 25, no. 1, 2014, pp. 170–178. SAGE, https://doi.org/10.1177/0956797613501884.

Vohs, Kathleen D. “Barnum Effect.” Encyclopaedia Britannica, 10 Mar. 2026, https://www.britannica.com/science/Barnum-Effect.

Whitson, Jennifer A., and Adam D. Galinsky. “Lacking Control Increases Illusory Pattern Perception.” Science, vol. 322, no. 5898, 2008, pp. 115–117. https://doi.org/10.1126/science.1159845.

Wilson, Riley E. Predictive Texts: Modern Mysticism and Algorithmic Divination. University of Michigan, Horace H. Rackham School of Graduate Studies, 2024. PhD dissertation. https://doi.org/10.7302/24040.

Yesilada, Muhsin, and Stephan Lewandowsky. “Systematic Review: YouTube Recommendations and Problematic Content.” Internet Policy Review, vol. 11, no. 1, 2022, article 1652. https://doi.org/10.14763/2022.1.1652.

Yeung, Anthony, Enoch Ng, and Elia Abi-Jaoude. “TikTok and Attention-Deficit/Hyperactivity Disorder: A Cross-Sectional Study of Social Media Content Quality.” The Canadian Journal of Psychiatry, vol. 67, no. 12, 2022, pp. 899–906. https://doi.org/10.1177/07067437221082854.

YouTube Help. “How YouTube Recommendations Work.” YouTube Help, n.d., https://support.google.com/youtube/answer/16089387?hl=en.

YouTube Help. “Manage Your Recommendations & Search Results.” YouTube Help, n.d., https://support.google.com/youtube/answer/6342839?hl=en.

YouTube Help. “View, Delete, or Turn On or Off Watch History.” YouTube Help, n.d., https://support.google.com/youtube/answer/95725?co=GENIE.Platform%3DAndroid&hl=en.

Graham, Garth. “New Mental Health & Wellbeing Tools – Built Just for Teens.” YouTube Official Blog, 14 Oct. 2025, https://blog.youtube/news-and-events/new-mental-health-wellbeing-tools-2025/.

Graham, Garth. “Safer Internet Day: Empowering Teens and Parents with Tools to Support Wellbeing.” YouTube Official Blog, 11 Feb. 2025, https://blog.youtube/inside-youtube/teen-mental-health-tools-2025/.

‍Graham, Garth, and James Beser. “Expanding Our Support for Teen Wellbeing on YouTube Across Europe.” YouTube Official Blog, 5 Sept. 2024, https://blog.youtube/inside-youtube/expanding-support-for-teen-wellbeing-europe/.

Graham, Garth, and James Beser. “Investing to Protect Teen Wellbeing on YouTube Across Europe and Globally.” YouTube Official Blog, 3 Mar. 2025, https://blog.youtube/inside-youtube/teen-wellbeing-on-youtube-europe/.
