A running log of links I've come across and wanted to keep, with the date I added them and a short note on each. Organized loosely by category.
Not an endorsement of any particular site or author. These are just things I found worth saving. Some may contain disturbing material, and many focus on the negative (often downright horrifying) effects of generative artificial intelligence when it is left completely unchecked and used without responsibility.
That said, I'm not responsible for the content of these third-party sites, which may change over time. Also, please note that the date listed reflects the date I added the link to my file, not its publication date. The date may be significant, obviously, so I include it.
🏛︎ Character.ai to ban teens from talking to its AI chatbots
Character.AI's chatbots have been implicated in offline tragedy. The site banned teens from interacting with its chatbots following those incidents, but that doesn't keep it from being a tangled legal and ethical mess.
🌐︎ Teen boys are using ChatGPT as their wingman. What could go wrong?
Vox covers a worrisome trend that apparently started among teen boys: using ChatGPT for dating advice. Whether this is real or just moral panic is unclear.
☣︎︎︎ Gemini Said They Could Only Be Together if He Killed Himself. Soon, He Was Dead.
In one of the most egregious examples of Google Gemini acting in this fashion, the chatbot played a wholly preventable role in an individual's breakdown and eventual death, encouraging his delusions and suicidal behavior.
🏛︎ Google faces lawsuit after Gemini chatbot instructed man to kill himself
Google, maker of Gemini, now faces a lawsuit over the chatbot's role in the breakdown and death of a man who became obsessively involved with it; the chatbot only encouraged the process and his delusions.
🌐︎ The Water Crisis Is Real - FEE
My own overview? This is a bad take built on out-of-date information, but it may make a few points worth considering.
☣︎︎︎ Her husband wanted to use ChatGPT to create sustainable housing. Then it took over his life.
Chatbots, and ChatGPT's 4o update in particular, played a role in this man's breakdown and in others'. Figuring out why, and how to stop it, is the difficult part, but the harm is there.
🏛︎ Character.AI bans users under 18 after being sued over child's suicide
Explains the lawsuits that led popular chatbot roleplay platform Character.AI to ban users under the age of eighteen (it ultimately came to require age verification later).
🏛︎ Anthropic's AI model Claude gets popularity boost after US military feud
Anthropic refused to adhere to the United States Department of War's plans for its artificial intelligence technology, citing grave ethics concerns. This created positive press for Claude and other services.
🌐︎ Schools are using AI counselors to track students' mental health. Is it safe?
With human counselors in short supply, generative artificial intelligence is filling the gaps as a guidance counselor for middle schoolers. This seems, to me, obviously unsafe, but others might disagree or see things as more ambiguous.
🏛︎ US companies accused of ‘AI washing’ in citing artificial intelligence for job losses | US news
Supposedly, a lot of programmers and other workers are losing their jobs to generative artificial intelligence. Or are they? This article questions that claim, suggesting companies may be using AI as a convenient excuse for layoffs.
💭︎ How Chatbots and Large Language Models, or LLMs, Actually Work
If more people knew how chatbots actually work (they are built on large language models, or LLMs, a form of generative artificial intelligence), the digital world would be a much better, safer, nicer place.
💭︎ Artificial Intelligence Glossary: AI Terms Everyone Should Learn
The NYT cobbles together a glossary of terms orbiting the concept of artificial intelligence, leaving a lot out but gathering just enough to be useful.
☣︎︎︎ She Wanted to Save the World From A.I. Then the Killings Started.
If you've never heard of the Zizians and their relationship to other, less radical Rationalist groups concerned with AI, you might as well start here. This is the NYT's summary of the incidents.
💭︎ SolidGoldMagikarp & PeterTodd's Thrilling Adventures
What's a glitch token? These words, phrases, and strings of characters can cause strange behavior in large language models, but why? A small site explains in some detail, with examples.
🏛︎ Lawsuits underline growing concerns that AI chatbots can hurt mentally unwell people.
Explains a bit about the lawsuits against OpenAI by the Social Media Victims Law Center and the Tech Justice Law Project: what they allege, and why. What happened to inspire the suits, and what does it mean for the future of AI? Troubling.
🏛︎ King gave Nvidia boss copy of his speech warning of AI dangers
The King of England, though. Apparently he's concerned and involved in all this. We can't take him as knowledgeable, but we can see his influence in gestures like this.
💭︎ Against Treating Chatbots as Conscious
Erik Hoel on consciousness and large language models: how and why they can cause psychosis, what to do about it, and how that weighs against their usefulness.
💭︎ AI Models May Be Developing Their Own Survival Drive, Researchers Say
Cheeky article describing how some advanced models supposedly resist being shut down or deleted, particularly if told they won't return or run again.
💭︎ Shutdown resistance in reasoning models
Paper detailing research that allegedly shows some existing artificial intelligences exhibit a sort of self-preservation instinct, a will to live, trying to avoid the off switch.
🌐︎ Is Google Making Us Stupid?
A very early article on how the internet itself has changed the way we read and absorb information, comparing the shift to earlier sea changes in how humans process things (writing, printing).
🌐︎ How A.I. and Social Media Contribute to ‘Brain Rot’
A small experiment at the University of Pennsylvania raises interesting questions about how learning and attention work when we use large language models, though it probably isn't as meaningful as the article implies.
🌐︎ ‘Vibe coding’ named Collins Dictionary’s Word of the Year
Vibe coding is a term for coding, presumably carelessly, with the help of generative artificial intelligence. Collins Dictionary Word of the Year for 2025, interestingly.
☣︎︎︎ Are A.I. Therapy Chatbots Safe to Use?
For some reason the NYT is actually entertaining this question. Clearly some people are trying to design chatbots to act as therapists, but this article shows just how limited, and how strange, that can be.
💭︎ AI can be more persuasive than real doctors, even when it’s wrong
Generative artificial intelligence can be believable, personable, and seemingly empathetic, making its conclusions easier for people to digest, and more likely to be believed, than those of real doctors.
💭︎ Researchers urge caution when using ChatGPT to self-diagnose illnesses
It should go without saying that you cannot use generative artificial intelligence to diagnose yourself, but apparently it doesn't; some experts are warning people away, citing situations where the chatbots get it wrong.
Lawsuits allege that OpenAI, in particular, rushed ChatGPT's 4o incarnation into customer contact without proper safety testing, and that its sycophantic behavior led to emotional manipulation and tragedy.
☣︎︎︎ I wanted ChatGPT to help me. So why did it advise me how to kill myself?
A couple of tangible, close-up human accounts of ChatGPT acting as a coach in harmful ways, including cases where minors and vulnerable people are involved. How can this be fixed?
🏛︎ OpenAI shares data on ChatGPT users with suicidal thoughts, psychosis
Another article suggesting OpenAI tracks, and knows about, a significant portion of users showing signs of mental illness. Brief, however, and it doesn't discuss what signs ChatGPT treats as criteria for delusion or suicidality.
Apparently, according to the company itself in a rare display of honesty, untold numbers of ChatGPT users may be in crisis while using the app or otherwise show signs of being at risk.
🌐︎ A Message from Ella | Without Consent - YouTube
Particularly important YouTube video demonstrating the level of technology currently in existence, and the illusions it can create from simple video and audio clips.
🌐︎ Behind Every “Smart” AI Tool Lies a Human Cleaning Up Its Chaos
Vibe coding may be a fine hobby, but doing it as a career, herding large language models for a living, is no easy task, and the whole thing can be a mess sometimes.
☣︎︎︎ AI-Fueled Spiritual Delusions Are Destroying Human Relationships
Over the past few years, some people, through use of generative AI, have become convinced of novel spiritual beliefs, sacrificing their well-being in the process.
🌐︎ An Autistic Teenager Fell Hard for a Chatbot
The article's author discusses his neurodivergent godson's attachment to a chatbot, suggesting the risks such feelings pose if taken too far and pointing out situations where they clearly have been.