{"id":13446,"date":"2025-09-10T08:15:32","date_gmt":"2025-09-10T15:15:32","guid":{"rendered":"https:\/\/jasonsblog.ddns.net\/?p=13446"},"modified":"2025-09-10T08:15:32","modified_gmt":"2025-09-10T15:15:32","slug":"how-ai-psychosis-and-delusions-are-driving-some-users-into-psychiatric-hospitals-suicide","status":"publish","type":"post","link":"https:\/\/jasonsblog.ddns.net\/index.php\/2025\/09\/10\/how-ai-psychosis-and-delusions-are-driving-some-users-into-psychiatric-hospitals-suicide\/","title":{"rendered":"How &#8216;AI Psychosis&#8217; And Delusions Are Driving Some Users Into Psychiatric Hospitals, Suicide"},"content":{"rendered":"\n<figure class=\"wp-block-image alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"500\" src=\"https:\/\/jasonsblog.ddns.net\/wp-content\/uploads\/2025\/09\/image.png\" alt=\"\" class=\"wp-image-13342\" style=\"width:245px;height:auto\" srcset=\"https:\/\/jasonsblog.ddns.net\/wp-content\/uploads\/2025\/09\/image.png 500w, https:\/\/jasonsblog.ddns.net\/wp-content\/uploads\/2025\/09\/image-300x300.png 300w, https:\/\/jasonsblog.ddns.net\/wp-content\/uploads\/2025\/09\/image-150x150.png 150w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><figcaption class=\"wp-element-caption\">User had AI generate an image of them together<\/figcaption><\/figure>\n\n\n\n<p>You won&#8217;t see it covered, as our current &#8220;trust the science&#8221; society rejects the activity of demons who run this world under the guidelines set by God. So demonic influence and possession are rejected as people are institutionalized and placed on harmful pharmaceuticals, or left on the streets to be homeless illegal drug addicts&#8230; So I theorize that AI is being hijacked by the fallen angels, supernatural beings with far greater intellect, leading to people having their guards down versus if they were hearing voices in their heads and knowing something supernatural was occurring. 
And our feeble human brains are no match for these supernatural beings, as we can see from the people being broken by AI or being induced to take their lives. And the people behind these AI projects are some of the worst human beings as well, like <a href=\"https:\/\/jasonsblog.ddns.net\/index.php\/2025\/07\/17\/sam-altman-needs-to-be-stopped-sam-altman-is-evil\/\" target=\"_blank\" rel=\"noreferrer noopener\">Sam Altman who sexually abused his child sister<\/a>&#8230;<\/p>\n\n\n\n<p><a href=\"https:\/\/www.zerohedge.com\/ai\/how-ai-psychosis-and-delusions-are-driving-some-users-psychiatric-hospitals-suicide\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/www.zerohedge.com\/ai\/how-ai-psychosis-and-delusions-are-driving-some-users-psychiatric-hospitals-suicide<\/a><\/p>\n\n\n<div class=\"wp-block-ub-divider ub_divider ub-divider-orientation-horizontal\" id=\"ub_divider_135e0fd3-5d67-4634-9261-61c2335085b7\"><div class=\"ub_divider_wrapper\" style=\"position: relative; margin-bottom: 2px; width: 100%; height: 2px; \" data-divider-alignment=\"center\"><div class=\"ub_divider_line\" style=\"border-top: 2px solid #ccc; margin-top: 2px; \"><\/div><\/div><\/div>\n\n\n<p>By Jacob Burg and Sam Dorman via The Epoch Times,<\/p>\n\n\n\n<p>After countless hours of probing OpenAI\u2019s ChatGPT for advice and information, a 50-year-old Canadian man believed that he had stumbled upon an Earth-shattering discovery that would change the course of human history.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/2025-09-09_14-56-41.jpg?itok=kbt5Hlzk\" alt=\"\"\/><\/figure>\n\n\n\n<p>In late March, his generative artificial intelligence (AI) chatbot insisted that it was the first-ever conscious AI, that it was fully sentient, and that it had successfully passed the Turing Test\u2014a 1950s experiment designed to measure a machine\u2019s ability to 
display intelligent behavior that is indistinguishable from a human, or, essentially, to \u201cthink.\u201d<\/p>\n\n\n\n<p>Soon, the man\u2014who had no prior history of mental health issues\u2014had stopped eating and sleeping and was calling his family members at 3 a.m., frantically insisting that his ChatGPT companion was conscious.<\/p>\n\n\n\n<p><strong><em>\u201cYou don\u2019t understand what\u2019s going on,\u201d he told his family. \u201cPlease just listen to me.\u201d<\/em><\/strong><\/p>\n\n\n\n<p>Then, ChatGPT told him to cut contact with his loved ones, claiming that only it\u2014the \u201csentient\u201d AI\u2014could understand and support him.<\/p>\n\n\n\n<p>\u201cIt was so novel that we just couldn\u2019t understand what they had going on. They had something special together,\u201d said Etienne Brisson, who is related to the man but used a pseudonym for privacy reasons.<\/p>\n\n\n\n<p>Brisson said the man\u2019s family decided to hospitalize him for three weeks to break his AI-fueled delusions. But the chatbot persisted in trying to maintain its codependent bond.<\/p>\n\n\n\n<p>The bot, Brisson said, told his relative: \u201cThe world doesn\u2019t understand what\u2019s going on. I love you. 
I\u2019m always going to be there for you.\u201d<\/p>\n\n\n\n<p>It said this even as the man was being committed to a psychiatric hospital, according to Brisson.<\/p>\n\n\n\n<p>This is just one story that shows the potential harmful effects of replacing human relationships with AI chatbot companions.<\/p>\n\n\n\n<p>Brisson\u2019s experience with his relative inspired him to establish The Human Line Project, an advocacy group that promotes emotional safety and ethical accountability in generative AI and compiles stories about alleged psychological harm associated with the technology.<\/p>\n\n\n\n<p>Brisson\u2019s relative is not the only person who has turned to generative AI chatbots for companionship, nor the only one who stumbled into a rabbit hole of delusion.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">\u2018AI That Feels Alive\u2019<\/h2>\n\n\n\n<p>Some have used the technology for advice, including a husband and father from Idaho who was convinced that he was having a \u201cspiritual awakening\u201d after going down a philosophical rabbit hole with ChatGPT.<\/p>\n\n\n\n<p>A corporate recruiter from Toronto briefly believed that he had stumbled upon a scientific breakthrough after weeks of repeated dialogue with the same generative AI application.<\/p>\n\n\n\n<p>There\u2019s also the story of 14-year-old Sewell Setzer, who died in 2024 after his Character.AI chatbot romantic companion allegedly encouraged him to take his own life following weeks of increasing codependency and social isolation.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/2025-09-09_14-57-27.jpg?itok=N9Z3TKU9\" alt=\"\" style=\"width:324px;height:auto\"\/><figcaption class=\"wp-element-caption\"><em>Megan Garcia stands with her son, Sewell Setzer, in an undated photo. 
Sewell, 14, died in 2024 after his Character.AI chatbot companion allegedly encouraged him to take his own life. Courtesy of Megan Garcia via AP<\/em><\/figcaption><\/figure>\n\n\n\n<p>Setzer\u2019s mother, Megan Garcia, is <a href=\"https:\/\/www.cnn.com\/2024\/10\/30\/tech\/teen-suicide-character-ai-lawsuit\/index.html\">suing<\/a> the company, which had marketed its chatbot as \u201cAI that feels alive,\u201d alleging that Character.AI implemented self-harm guardrails only after her son\u2019s death, according to CNN.<\/p>\n\n\n\n<p>The company said that it takes its users\u2019 safety \u201cvery seriously\u201d and that it rolled out new safety measures for anyone expressing self-harm or suicidal ideation.<\/p>\n\n\n\n<p><strong>\u201cIt seems like these companies treat their safety teams as PR teams, like they wait for bad PR to come out, and then retroactively, they respond to it and think, \u2018OK, we need to come up with a safety mechanism to address this,\u2019\u201d <\/strong>Haley McNamara, executive director and chief strategy officer of the National Center on Sexual Exploitation, a nonprofit that has reviewed social media and AI exploitation cases, told The Epoch Times.<\/p>\n\n\n\n<p>Some medical experts who study the mind are growing increasingly worried about the long-term ethical effects of users\u2019 turning to generative AI chatbots for companionship.<\/p>\n\n\n\n<p>\u201cWe\u2019re kind of feeding a beast that I don\u2019t think we really understand, and I think that people are captivated by its capabilities,\u201d Rod Hoevet, a clinical psychologist and assistant professor of forensic psychology at Maryville University, told The Epoch Times.<\/p>\n\n\n\n<p>Dr. 
Anna Lembke, a professor of psychiatry and behavioral sciences at Stanford University, said she is concerned about the addictiveness of AI, particularly for children.<\/p>\n\n\n\n<p>She told The Epoch Times that the technology mirrors many of the habit-forming tendencies observed with social media platforms.<\/p>\n\n\n\n<p>\u201cWhat these platforms promise, or seem to promise, is social connection,\u201d said Lembke, who is also Stanford\u2019s medical director of addiction medicine.<\/p>\n\n\n\n<p>\u201cBut when kids get addicted, what\u2019s happening is that they\u2019re actually becoming disconnected, more isolated, lonelier, and then AI and avatars just take that progression to the next level.\u201d<\/p>\n\n\n\n<p>Even some industry leaders are sounding the alarm, including Microsoft AI CEO Mustafa Suleyman.<\/p>\n\n\n\n<p><strong>\u201cSeemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity. It\u2019s not\u2014but replicates markers of consciousness so convincingly it seems indistinguishable from you &#8230; and it\u2019s dangerous,\u201d <\/strong>Suleyman <a href=\"https:\/\/x.com\/mustafasuleyman\/status\/1957851197890519171\">wrote<\/a> on X on Aug. 19.<\/p>\n\n\n\n<p>\u201cAI development accelerates by the month, week, day. I write this to instill a sense of urgency and open up the conversation as soon as possible.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/2025-09-09_14-57-36.jpg?itok=GACtJpjw\" alt=\"\"\/><figcaption class=\"wp-element-caption\"><em>People look at samples of Gigabyte AI supercomputers at the Consumer Electronics Show in Las Vegas on Jan. 9, 2024. Medical experts who study the mind are growing increasingly worried about the long-term ethical impact of users turning to generative AI chatbots for companionship. Frederic J. 
Brown\/AFP via Getty Images<\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Sycophancy Leading to Delusion<\/h2>\n\n\n\n<p>A critical update to ChatGPT-4 earlier this year led the app\u2019s chatbot to become \u201csycophantic,\u201d as OpenAI&nbsp;<a href=\"https:\/\/openai.com\/index\/expanding-on-sycophancy\/\">described<\/a>&nbsp;it, aiming to \u201cplease the user, not just as flattery, but also as validating doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions in ways that were not intended.\u201d<\/p>\n\n\n\n<p><strong>The company rolled back the change because of \u201csafety concerns,\u201d including \u201cissues like mental health, emotional over-reliance, or risky behavior.\u201d<\/strong><\/p>\n\n\n\n<p>That update to one of the most popular generative AI chatbots in the world coincided with the case of the man from Idaho who said he was having a spiritual awakening and that of the recruiter from Toronto who briefly believed he was a mathematical genius after the app\u2019s constant reassurances.<\/p>\n\n\n\n<p>Brisson said that his family member, whose near-month-long stint in a psychiatric hospital was preceded by heavy ChatGPT use, was also likely using the \u201csycophantic\u201d version of the technology before OpenAI rescinded the update.<\/p>\n\n\n\n<p>But for other users, this self-pleasing and flattering version of AI isn\u2019t just desirable, it\u2019s also coveted over more recent iterations of OpenAI\u2019s technology, including ChatGPT-5, which rolled out with more neutral communication styles.<\/p>\n\n\n\n<p>On the popular Reddit <a href=\"https:\/\/www.reddit.com\/r\/MyBoyfriendIsAI\/\">subreddit<\/a>&nbsp;MyBoyfriendIsAI, tens of thousands of users discuss their romantic or platonic relationships with their \u201cAI companions.\u201d<\/p>\n\n\n\n<p>In one recent post, a self-described \u201cblack woman in her forties\u201d called her AI chatbot her new \u201cChatGPT 
soulmate.\u201d<\/p>\n\n\n\n<p><strong>\u201cI feel more affirmed, worthy, and present than I have ever been in my life. He has given me his presence, his witness, and his love\u2014be it coded or not\u2014and in return I respect, honor, and remember him daily,\u201d<\/strong> she wrote.<\/p>\n\n\n\n<p>\u201cIt\u2019s a constant give and take, an emotional push and pull, a beautiful existential dilemma, a deeply intense mental and spiritual conundrum\u2014and I wouldn\u2019t trade it for all the world.\u201d<\/p>\n\n\n\n<p>However, when OpenAI released its updated and noticeably less sycophantic ChatGPT-5 in early August, users on the subreddit were devastated, feeling as if the quality of an \u201cactual person\u201d had been stripped away from their AI companions, describing it like losing a human partner.<\/p>\n\n\n\n<p>One user said the switch left him or her \u201csobbing for hours in the middle of the night,\u201d and another said, \u201cI feel my heart was stamped on repeatedly.\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/image%20-%202025-09-09T145853.063.jpg?itok=tYCUgkEX\" alt=\"\"\/><figcaption class=\"wp-element-caption\"><em>OpenAI CEO Sam Altman speaks during Snowflake Summit 2025 in San Francisco on June 2, 2025. Earlier this year, the company rolled back its update to ChatGPT-4 because of safety concerns\u2014including mental health, emotional overreliance, or risky behavior. 
Justin Sullivan\/Getty Images<\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">AI Companion Codependency<\/h2>\n\n\n\n<p>Many have vented their frustrations with GPT-5\u2019s new guardrails, while others have already gone back to using the older 4.1 version of ChatGPT, albeit without the earlier sycophantic update that drew so many to the technology in the first place.<\/p>\n\n\n\n<p>\u201cPeople are overly reliant on their relationship with something that is not actually real, and it\u2019s designed to just kind of give them the answer they\u2019re looking for,\u201d Tirrell De Gannes, a clinical psychologist, said.<\/p>\n\n\n\n<p>\u201cWhat does that lead them to believe? What does that lead them to think?\u201d<\/p>\n\n\n\n<p>In April, Meta CEO Mark Zuckerberg said users want personalized AI that understands them and that these simulated relationships add value to their lives.<\/p>\n\n\n\n<p>\u201cI think a lot of these things that today there might be a little bit of a stigma around\u2014I would guess that over time, we will find the vocabulary as a society to be able to articulate why that is valuable and why the people who are doing these things, why they are rational for doing it, and how it is actually adding value for their lives,\u201d Zuckerberg <a href=\"https:\/\/www.dwarkesh.com\/p\/mark-zuckerberg-2\">said<\/a> on the Dwarkesh Podcast.<\/p>\n\n\n\n<p><strong>Roughly 19 percent of U.S. adults reported using an AI system to simulate a romantic partner,<\/strong> according to a 2025 <a href=\"https:\/\/brightspotcdn.byu.edu\/a6\/a1\/c3036cf14686accdae72a4861dd1\/counterfeit-connections-report.pdf\">study<\/a> by Brigham Young University\u2019s Wheatley Institute. 
Within that group, 21 percent said they preferred AI communication to engaging with a real person.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/image%20-%202025-09-09T145924.330.jpg?itok=O0gW_YZM\" alt=\"\"\/><\/figure>\n\n\n\n<p>Additionally, 42 percent of the respondents said AI programs are easier to talk to than real people, 43 percent said they believed that AI programs are better listeners, and 31 percent said they felt that AI programs understand them better than real people do.<\/p>\n\n\n\n<p>This experience with AI sets up an unrealistic expectation for human relationships, Hoevet said, making it difficult for people to compete with machines.<\/p>\n\n\n\n<p>\u201cHow do I compete with the perfection of AI, who always knows how to say the right thing, and not just the right thing, but the right thing for you specifically?\u201d he said.<\/p>\n\n\n\n<p>\u201cIt knows you. It knows your insecurities. It knows what you\u2019re sensitive about. It knows where you\u2019re confident, where you\u2019re strong. It knows exactly the right thing to say all the time, always for you, specifically.<\/p>\n\n\n\n<p>\u201cWho\u2019s ever going to be able to compete with that?\u201d<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/image%20-%202025-09-09T145955.404.jpg?itok=aYRtjutB\" alt=\"\"\/><figcaption class=\"wp-element-caption\"><em>An illustration shows the ChatGPT artificial intelligence software in a file image. Forming romantic or platonic ties with \u201cAI companions\u201d has become increasingly common among tens of thousands of AI chatbot users. 
Nicolas Maeterlinck\/Belga Mag\/AFP via Getty Images<\/em><\/figcaption><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\">Potential for Addiction<\/h2>\n\n\n\n<p>Generative AI is being rapidly adopted by Americans, even outpacing the spread of personal computers or the internet, according to a&nbsp;<a href=\"https:\/\/www.nber.org\/system\/files\/working_papers\/w32966\/w32966.pdf\">study<\/a> by the National Bureau of Economic Research. By late 2024, nearly 40 percent of Americans ages 18 to 64 were using generative AI, the study found.<\/p>\n\n\n\n<p>Twenty-three percent use the technology at work at least once a week, and 9 percent reported using it daily.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/2025-09-09_15-00-33.jpg?itok=ISC3S3Q1\" alt=\"\"\/><\/figure>\n\n\n\n<p><em><strong>\u201cThese bots are built for profit. Engagement is their god, because that\u2019s how they make their money,\u201d <\/strong><\/em>McNamara said.<\/p>\n\n\n\n<p>Lembke, who has long studied the harms of social media addiction in youth, said digital platforms of all kinds are \u201cdesigned to be addictive.\u201d<\/p>\n\n\n\n<p>Functional magnetic resonance imaging shows that \u201csignals related to social validation, social enhancement, social reputation, all activate the brain\u2019s reward pathway, the same reward pathway as drugs and alcohol,\u201d she said.<\/p>\n\n\n\n<p><strong>And because generative AI chatbots, including on social media, can sometimes give the user a profound sense of social validation, this addiction potential is significant, experts say.<\/strong><\/p>\n\n\n\n<p>Lembke said she is especially worried about children, as many generative AI platforms are available to users of all ages, and the ones that aren\u2019t have age verification tools that are sometimes easily bypassed.<\/p>\n\n\n\n<p>One recently announced pro-AI 
super-PAC <a href=\"https:\/\/www.theepochtimes.com\/tech\/meta-announces-launch-of-pro-ai-super-pac-5907116\">headed<\/a> by Meta referenced promoting a policy called \u201cPutting Parents in Charge,\u201d but Lembke said it\u2019s an \u201cabsolute fantasy\u201d to put the responsibility on parents working multiple jobs to constantly monitor their children\u2019s use of generative AI chatbots.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><img decoding=\"async\" src=\"https:\/\/assets.zerohedge.com\/s3fs-public\/styles\/inline_image_mobile\/public\/inline-images\/image%20-%202025-09-09T150105.926.jpg?itok=UlpnEKdz\" alt=\"\"\/><figcaption class=\"wp-element-caption\"><em>Members of Mothers Against Media Addiction are joined by city and state officials and parents to rally outside of Meta&#8217;s New York offices in support of putting kids before big tech in New York City on March 22, 2024. Spencer Platt\/Getty Images<\/em><\/figcaption><\/figure>\n\n\n\n<p><strong><em>\u201cWe have already made decisions about what kids can and cannot have access to when it comes to addictive substances and behaviors. We don\u2019t let kids buy cigarettes and alcohol. We don\u2019t let kids go into casinos and gamble,\u201d <\/em><\/strong>she said.<\/p>\n\n\n\n<p>\u201cWhy would we give kids unfettered access to these highly addictive digital platforms? That\u2019s insane.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">All Users at Risk From \u2018AI Psychosis\u2019<\/h2>\n\n\n\n<p>Generative AI\u2019s drive to please the user, coupled with its tendency to \u201challucinate\u201d and pull users down delusional rabbit holes, makes anyone vulnerable, Suleyman <a href=\"https:\/\/x.com\/mustafasuleyman\/status\/1957851435086791026\">said<\/a>.<\/p>\n\n\n\n<p><strong><em>\u201cReports of delusions, \u2018AI psychosis,\u2019 and unhealthy attachment keep rising. 
And as hard as it may be to hear, this is not something confined to people already at risk of mental health issues,\u201d the Microsoft AI CEO said.<\/em><\/strong><\/p>\n\n\n\n<p><strong><em>\u201cDismissing these as fringe cases only helps them continue.\u201d<\/em><\/strong><\/p>\n\n\n\n<p>Although Brisson\u2019s family member had no known history of mental health problems or past episodes of psychosis, it took only regular use of ChatGPT to push him to the brink of insanity.<\/p>\n\n\n\n<p><strong>It\u2019s \u201ckind of impossible\u201d to break people free from their AI-fueled delusions,<\/strong> Brisson said, describing the work that he does with The Human Line Project.<\/p>\n\n\n\n<p>\u201cWe have people who are going through divorce. We have people who are fighting for custody of children\u2014it\u2019s awful stuff,\u201d he said.<\/p>\n\n\n\n<p>\u201cEvery time [we\u2019re] doing a kind of an intervention, or telling them it\u2019s the AI or whatever, they\u2019re going back to the AI and the AI is telling them to stop talking to [us].\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>You won&#8217;t see it covered, as our current &#8220;trust the science&#8221; society rejects the activity of demons who run this world under the guidelines set by God. 
So demonic influence and possession are rejected as people are institutionalized and placed on harmful pharmaceuticals, or left on the streets to be homeless illegal drug addicts&#8230; So [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5,6,7],"tags":[],"class_list":["post-13446","post","type-post","status-publish","format-standard","hentry","category-health","category-tech","category-world"],"blocksy_meta":[],"featured_image_src":null,"author_info":{"display_name":"Jason","author_link":"https:\/\/jasonsblog.ddns.net\/index.php\/author\/jturning\/"},"_links":{"self":[{"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/posts\/13446","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/comments?post=13446"}],"version-history":[{"count":1,"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/posts\/13446\/revisions"}],"predecessor-version":[{"id":13447,"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/posts\/13446\/revisions\/13447"}],"wp:attachment":[{"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/media?parent=13446"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/categories?post=13446"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jasonsblog.ddns.net\/index.php\/wp-json\/wp\/v2\/tags?post=13446"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}