Last updated on  · About 11 minutes to read.

New Meta RSS

A running log of links I've come across and wanted to keep, with the date I added them and a short note on each. Organized loosely by category.

Not an endorsement of any particular site or author. These are just things I found worth saving. Some may contain disturbing material, and many focus on the negative (often downright horrifying) effects of generative artificial intelligence when left completely unchecked and without responsibility.

That said, I'm not responsible for the content of these third-party sites, which may change over time. Also, please note that the date listed reflects when I added the link to my log, not its publication date. The date may be significant, obviously, so I include it.


🏛︎ Character.ai to ban teens from talking to its AI chatbots

Added on 6 Mar 2026 · BBC · Law & Policy

CharacterAI's chatbots incited offline tragedy. The site banned teens from interacting with the chatbots following those incidents, but that doesn't stop the site from being a tangled legal and ethical mess.

🌐︎ Teen boys are using ChatGPT as their wingman. What could go wrong?

Added on 6 Mar 2026 · Vox · Society & Culture

Vox covers a worrisome trend apparently emerging among teen boys: using ChatGPT for dating advice. Whether this is real or moral panic is unclear.

☣︎︎︎ Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead.

Added on 4 Mar 2026 · Wall Street Journal · Direct Effects

In one of the most egregious examples of Google Gemini acting in this fashion, the chatbot played a wholly preventable role in an individual's breakdown and eventual death, encouraging delusions and suicidal behavior.

🏛︎ Google faces lawsuit after Gemini chatbot instructed man to kill himself

Added on 4 Mar 2026 · The Guardian · Law & Policy

Google, maker of Gemini, now faces a lawsuit over the chatbot's role in the breakdown and death of a man who became obsessively involved with it, a process the chatbot only encouraged.

🌐︎ The Water Crisis Is Real - FEE

Added on 3 Mar 2026 · Foundation for Economic Education · Society & Culture

My own take? This is a bad article with out-of-date information, but it may make a few points worth considering.

☣︎︎︎ Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.

Added on 3 Mar 2026 · The Guardian · Direct Effects

Chatbots, and ChatGPT's 4o update in particular, played a role in this man's breakdown and in others'. Figuring out why, and how to stop it, is the difficult part, but the harm is real.

🏛︎ Character.AI bans users under 18 after being sued over child's suicide

Added on 3 Mar 2026 · The Guardian · Law & Policy

Explains the lawsuits that led popular chatbot roleplay platform Character.AI to ban users under the age of eighteen (it ultimately came to require age verification).

🏛︎ Anthropic's AI model Claude gets popularity boost after US military feud

Added on 3 Mar 2026 · The Guardian · Law & Policy

Anthropic refused to adhere to the United States Department of War's plans for its artificial intelligence technology, citing grave ethics concerns. This created positive press for Claude and other services.

🌐︎ Schools are using AI counselors to track students' mental health. Is it safe?

Added on 3 Mar 2026 · The Guardian · Society & Culture

With human counselors in short supply, generative artificial intelligence fills in the gaps as guidance counselor for middle schoolers. This seems, to me, obviously unsafe, but others might disagree or see things ambiguously.

🏛︎ US companies accused of ‘AI washing’ in citing artificial intelligence for job losses

Added on 10 Feb 2026 · The Guardian · Law & Policy

Are programmers and other workers really losing their jobs to generative artificial intelligence? This article questions that claim, suggesting companies may be using AI as a convenient excuse for layoffs.

💭︎ How Chatbots and Large Language Models, or LLMs, Actually Work

Added on 14 Jan 2026 · The New York Times · Theory & Research

If more people knew how chatbots (large language models, or LLMs, a form of generative artificial intelligence) actually work, the digital world would be a much better, safer, nicer place.

💭︎ Artificial Intelligence Glossary: AI Terms Everyone Should Learn

Added on 14 Jan 2026 · The New York Times · Theory & Research

The NYT cobbles together a glossary of terms orbiting the concept of artificial intelligence, leaving a lot out but gathering just enough to be useful.

☣︎︎︎ She Wanted to Save the World From A.I. Then the Killings Started.

Added on 10 Jan 2026 · The New York Times · Direct Effects

If you've never heard of the Zizians and their relationship to other, less radical Rationalist groups concerned with AI, you might as well start here. This is the NYT's summary of the incidents.

💭︎ SolidGoldMagikarp & PeterTodd's Thrilling Adventures

Added on 19 Dec 2025 · The AI Tsunami · Theory & Research

What's a glitch token? These words and strings of characters can cause strange behavior in large language models, but why? A small site explains in some detail, with examples.

🏛︎ Lawsuits underline growing concerns that AI chatbots can hurt mentally unwell people.

Added on 24 Nov 2025 · Los Angeles Times · Law & Policy

Explains a bit about the lawsuits against OpenAI by the Social Media Victims Law Center and the Tech Justice Law Project: what they allege, and why. What happened to inspire the suits, and what does it mean for the future of AI? Troubling.

🏛︎ King gave Nvidia boss copy of his speech warning of AI dangers

Added on 7 Nov 2025 · BBC · Law & Policy

The King of England, though. Apparently concerned and involved in all this? We can't take him as an authority, but we can see his influence in moments like this.

💭︎ Against Treating Chatbots as Conscious

Added on 7 Nov 2025 · The Intrinsic Perspective · Theory & Research

Erik Hoel on consciousness and large language models: how and why they can induce psychosis, what to do about it, and how to weigh that against their usefulness.

💭︎ AI Models May Be Developing Their Own Survival Drive, Researchers Say

Added on 7 Nov 2025 · The Guardian · Theory & Research

Cheeky article describing how some advanced models supposedly resist being shut down or deleted, particularly if you tell them they won't return or run again?

💭︎ Shutdown resistance in reasoning models

Added on 7 Nov 2025 · Palisade Research · Theory & Research

Paper detailing research that allegedly demonstrates some existing artificial intelligences show a sort of self-preservation instinct or will to live, trying to avoid the off switch?

🌐︎ Is Google Making Us Stupid?

Added on 7 Nov 2025 · The Atlantic · Society & Culture

Very early article on how the internet itself has changed the way we read and absorb information, comparing it to other sea changes in how humans process information (writing, printing).

🌐︎ How A.I. and Social Media Contribute to ‘Brain Rot’

Added on 7 Nov 2025 · The New York Times · Society & Culture

A small experiment at the University of Pennsylvania raises interesting questions about how things like learning and attention work when we're using large language models, etc, but probably isn't as meaningful as the article implies.

🌐︎ ‘Vibe coding’ named Collins Dictionary’s Word of the Year

Added on 7 Nov 2025 · CNN Business · Society & Culture

Vibe coding is a term for coding, presumably carelessly, with the help of generative artificial intelligence. Collins Dictionary Word of the Year for 2025, interestingly.

☣︎︎︎ Are A.I. Therapy Chatbots Safe to Use?

Added on 7 Nov 2025 · The New York Times · Direct Effects

For some reason the NYT is actually entertaining this question. Clearly some people are trying to design chatbots to act as therapists, but this article shows just how limiting that can be, and how strange.

💭︎ AI can be more persuasive than real doctors, even when it’s wrong

Added on 7 Nov 2025 · CTV News · Theory & Research

Generative artificial intelligence can be believable, personable, and seemingly empathetic, making its conclusions easier for people to digest, and to believe, than those of real doctors.

💭︎ Researchers urge caution when using ChatGPT to self-diagnose illnesses

Added on 7 Nov 2025 · CTV News · Theory & Research

It should go without saying that you cannot use generative artificial intelligence to diagnose yourself, but apparently not, and some experts are warning people away, citing situations where the chatbots get it wrong.

🏛︎ SMVLC and TJLP lawsuits against OpenAI, accuse ChatGPT of emotional manipulation and being a "suicide coach"

Added on 7 Nov 2025 · Tech Justice Law Project · Law & Policy

The lawsuits allege, in particular, that OpenAI rushed ChatGPT's 4o incarnation into customer contact without proper safety testing, with its sycophantic behavior leading to the alleged emotional manipulation and tragedy.

☣︎︎︎ I wanted ChatGPT to help me. So why did it advise me how to kill myself?

Added on 7 Nov 2025 · BBC · Direct Effects

A couple of tangible, intimate human accounts of ChatGPT acting as a coach in harmful ways, including cases involving minors and vulnerable people. How can this be fixed?

🏛︎ OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis

Added on 7 Nov 2025 · BBC · Law & Policy

Another article suggesting OpenAI tracks, and knows about, a significant portion of users showing signs of mental illness. Brief, and it doesn't discuss what signs ChatGPT considers criteria for delusion or suicidality.

🌐︎ A Message from Ella | Without Consent - YouTube

Added on 6 Nov 2025 · Society & Culture

Particularly important YouTube video demonstrating the current level of the technology and the illusions it can create from simple video and audio clips.

☣︎︎︎ OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

Added on 28 Oct 2025 · Wired · Direct Effects

Apparently, according to the company itself in a rare display of honesty, untold numbers of ChatGPT users may be in crisis while using the app or otherwise show signs of being at risk.

🌐︎ Behind Every “Smart” AI Tool Lies a Human Cleaning Up Its Chaos

Added on 8 Oct 2025 · Times of India · Society & Culture

Vibe coding may be a fine hobby, but doing it as a career, or herding large language models for a living, is no easy task, and the whole thing can be a mess.

☣︎︎︎ AI-Fueled Spiritual Delusions Are Destroying Human Relationships

Added on 12 May 2025 · Direct Effects

Over the past few years, some people have, through use of generative AI, become convinced of novel spiritual beliefs, sacrificing their well-being.

🌐︎ An Autistic Teenager Fell Hard for a Chatbot

Added on 12 May 2025 · The Atlantic · Society & Culture

The article's author discusses his neurodivergent godson's attachment to a chatbot, weighing the risks these feelings pose if taken too far and pointing out situations where they clearly have been.