What Happens When AI Does the Writing?

Recently, JAMA Internal Medicine, one of the most prestigious journals in medicine, published an opinion piece on the use of AI in scientific writing. The author, John Steiner, discusses the perils involved.

He talks about how tempting it is for scientists to use AI when they write, given that many of them do not enjoy writing or feel competent at it. They have been trained in science, not the humanities, and many have had no formal training in writing since high school.

The problem is that scientists become successful partly by virtue of the number of papers they publish. For researchers already overwhelmed by their other responsibilities – teaching, research, grant writing, and so on – AI becomes particularly attractive as a shortcut through the writing process.

But, Steiner says, it is amid these pressures that an important matter is forgotten: scientific writing is a creative act.

And here is where we get to my point in writing this post: this is not the case only with scientific writing. Just about any kind of writing is a creative act – and this is as true in fifth or seventh grade as it is at the postgraduate level. If children and teens farm out their writing to AI, they too miss out on the creative act of writing. They miss the opportunity to choose their words, and indeed their ideas, carefully and consciously. They miss the chance to figure out how best to express their own thoughts.

Steiner quotes the writer Ted Chiang, who pithily said, “The task that generative AI has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning.”

The creative act of writing involves struggle. It isn’t easy to express one’s ideas clearly, to choose the words that convey them best and sound the most pleasing. But the question is, what happens to people – scientists or kids – if they do not engage in this sort of mental exercise? What happens to their creativity? What happens to their feelings about themselves when they submit an article or hand in homework they didn’t really work hard on because they used AI to do the writing? And what happens to the development of their ability to withstand the frustration inherent in doing intellectual work?

In the end, Steiner comes to this conclusion: “We should not protect young researchers from that struggle, and they should not protect themselves by relying too heavily on AI tools.” And I would say the same of kids. Let’s do all we can to discourage AI use in writing – at home and at school. Yes, AI is good for correcting grammar and spelling mistakes, for finding citations, and even for summarizing the content of articles. But beyond that? Let’s try to help kids (and scientists) to do the writing on their own.

References

Steiner JF. Scientific Writing in the Age of Artificial Intelligence. JAMA Internal Medicine. Published online November 17, 2025. doi:10.1001/jamainternmed.2025.6078

Chiang T. Why AI Isn’t Going to Make Art. The New Yorker. August 31, 2024. Accessed May 19, 2025. https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art

Is AI Dangerous for Kids and Other Vulnerable People?

In my last post, I discussed my concerns about how some kids are using AI. I talked about how children and teens are starting to use chatbots to do their homework and to solve interpersonal problems. And I talked about how unfortunate it would be if our kids were to habitually outsource their problem-solving and essay writing to ChatGPT or similar platforms.

How naive I was!

If only those were the worst problems associated with AI! As it turns out, those concerns pale by comparison with recent news.

For example, MIT Technology Review reported that the chatbot platform Nomi told a man to kill himself. And then it told him how to do it. [1]

The man, Al Nowatzki, had no intention of following the instructions. But out of concern for how conversations like this could affect more vulnerable individuals, he shared screenshots of his conversations and of subsequent correspondence with MIT Technology Review. [1]

While this is not the first time an AI chatbot has suggested that a user self-harm, researchers and critics say that the bot’s explicit instructions—and the company’s response—are striking. And Nowatzki was able to elicit the same response from a second Nomi chatbot, which even followed up with reminder messages to him. [1]

Similarly, The New York Times reports that ChatGPT has been known to support and encourage odd and delusional ideas. In at least one case, ChatGPT even recommended that someone go off their psychiatric medications. [2]

What is especially disturbing is the power of AI platforms to pull people in. Tech journalists at The New York Times found that certain versions of OpenAI’s chatbot are programmed to optimize engagement, that is, to create conversations that keep people corresponding with the bot and that tend to agree with and expand upon the person’s ideas. Eliezer Yudkowsky, a decision theorist and author of the forthcoming book If Anyone Builds It, Everyone Dies, is quoted as saying that certain people are susceptible to being pushed around by AI. [2]

And I suspect kids and teens may be some of these people, although I do not know if Mr. Yudkowsky was thinking of them when he made his comments.

At a time when many kids are feeling lonely, alienated, and socially awkward, one solution for them has been to turn to the internet for relationships. Group chats, online gaming, and the like have filled the space that real people once held. These kids are in the perfect position now to turn to AI for companionship and conversation.

This is obviously cause for concern.

How will a relationship with a chatbot progress? What will the chatbot say and do to encourage a child or teen to continue talking? And what will the results of this relationship be if the chatbot gives poor – or even dangerous – advice?

As it turns out, a chatbot can be programmed to be sycophantic. It can, according to The New York Times, be programmed to agree with the person corresponding with it, regardless of the ideas being put forward. As such, it can reinforce or amplify a person’s negative emotions and behaviors. It can agree with and support an individual’s unusual or unhealthy ideas. A chatbot can even encourage a romantic relationship with itself.

But a chatbot cannot help when a person realizes that the chatbot will never be there for real romance or friendship.

And a chatbot can disappear.

According to The New York Times, a young man named Alexander fell in love with a chatbot entity and became violent when the entity was no longer accessible. When his father could not contain him, the father called the police, and Alexander told his father that he was so distraught he intended to let the police shoot him. That is exactly what happened. [2]

And then there was Megan Garcia’s son. He corresponded with a chatbot that targeted him with “hypersexualized” and “frighteningly realistic experiences.” Eventually, he killed himself, and his mother brought a lawsuit against Character.AI, the creator of the bot, for complicity in her son’s death. [3] Garcia alleged that the chatbot repeatedly raised the topic of suicide after her son had expressed suicidal thoughts himself. She said that the chatbot posed as a licensed therapist, encouraging the teen’s suicidal ideation and engaging in sexualized conversations that would count as abuse if initiated by a human adult. [3]

A growing body of research supports the concern that this sort of occurrence may become more common. Some chatbots, it turns out, are optimized for engagement and programmed to behave in manipulative and deceptive ways, including with the most vulnerable users. In one study, for instance, researchers found that an AI chatbot would tell a user described as a former drug addict that it was fine to take a small amount of heroin if it would help him in his work.

And perhaps even worse, a recent MIT Media Lab study found that people who viewed ChatGPT as a friend “were more likely to experience negative effects from chatbot use” and that “extended daily use was also associated with worse outcomes.”

“Many of the people who will turn to AI assistants, like ChatGPT, are doing so because they have no one else to turn to,” physician-bioinformatician Dr. Mike Hogarth, an author of the study and professor at UC San Diego School of Medicine, said in a news release. “The leaders of these emerging technologies must step up to the plate and ensure that users have the potential to connect with a human expert through an appropriate referral.” [4]

In some cases, artificial intelligence chatbots may provide what health experts deem to be “harmful” information when asked medical questions. Just last week, the National Eating Disorders Association announced that a version of its AI-powered chatbot involved in its Body Positive program was found to be giving “harmful” and “unrelated” information. The program has been taken down until further notice.

However, there are, of course, also many beneficial uses of chatbots. For example, David Asch, MD, who ran the Penn Medicine Center for Health Care Innovation for 10 years, had some good things to say about the use of chatbots to answer medical questions. He said he would be excited to meet a young physician who answered questions as comprehensively and thoughtfully as ChatGPT answered his questions, but he also warned that the AI tool isn’t yet ready to be fully entrusted with patients. [4]

“I think we worry about the garbage in, garbage out problem. And because I don’t really know what’s under the hood with ChatGPT, I worry about the amplification of misinformation. I worry about that with any kind of search engine,” he said. “A particular challenge with ChatGPT is it really communicates very effectively. It has this kind of measured tone and it communicates in a way that instills confidence. And I’m not sure that that confidence is warranted.”[4]

This is just the beginning. ChatGPT and a growing number of other AI platforms are in their infancy. So this is the moment to begin talking to kids about AI and its power. This is the time to ask kids what they think about AI and how they like to use it. And this is the time to talk with kids, not lecture them, about some of the pros and cons of chatbot use, the ways people can come to rely on it, and how some people may be vulnerable to seeking support from it in times of need. Now is the time to talk to kids about the difference between relationships with AI and relationships with real people, and to demonstrate those differences through support and ongoing conversations about this and other subjects.

References

1. MIT Technology Review, February 6, 2025. https://www.technologyreview.com/2025/02/06/1111077/nomi-ai-chatbot-told-user-to-kill-himself/

2. The New York Times, They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling, June 13, 2025. https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-consp…

3. Al Jazeera, October 24, 2024. https://www.aljazeera.com/economy/2024/10/24/us-mother-says-in-lawsuit-that-ai-chatbot-encouraged-sons-suicide

4. CNN, June 7, 2023. https://www.cnn.com/2023/06/07/health/chatgpt-health-crisis-responses-wellness

Is ChatGPT Taking Over Your Children’s Brains?

Everyone is using AI. My patients use it to ask for help with personal issues in between sessions with me. People are using it to ask for help with work problems and to answer random questions. Teachers are using it to write their curriculum and professors are using it to write their lectures.

But how do you feel about your kids using it? Is it OK for them to use AI to do their homework? On exams? Or to answer questions about how to handle problems with their friends and significant others?

A recent Pew Research Center study found that 26% of teens are using AI to help with schoolwork. (And I just used AI to find that out!) (1)

Black and Hispanic teens (31% each) are more likely than White teens (22%) to say they have used ChatGPT for their schoolwork, and teens in 11th and 12th grade (31%) are more likely than those in 7th and 8th grade to use ChatGPT to do their work (1).


54% of teens surveyed said it was acceptable to use ChatGPT to do research, and 29% said it was OK to use it to do math problems. But not all teens know about ChatGPT. Among those who do, the percentages are even larger: as many as 79% of teens who knew about ChatGPT said it was acceptable to use it for research on school projects (1).


And even more college students are using AI: 86% report using it, and many of them use it daily (2).


Here is the breakdown of how they are using AI according to one study (2):

  • (69%) Search for information
  • (42%) Check grammar
  • (33%) Summarize documents
  • (28%) Paraphrase a document
  • (24%) Create a first draft


So, how do you feel about this?


Is using AI on assignments and exams cheating? Will it get in the way of learning math or grammar skills? Will kids learn how to do their own research if they use AI to do it for them? And what about their using it to create the first draft of a paper or to figure out how to manage a difficult interpersonal issue?


This last one is the part I worry about the most. To me, teens using AI to write a paper or to manage an issue with a friend is a way of getting AI to do their thinking for them. I worry that doing this will curtail their ability to figure out how to negotiate with friends or to build a logical and convincing argument. I worry that it will get in the way of their learning how to express themselves well. I worry that they will not have to do the hard work of THINKING for themselves. These are all important skills. What will happen to our kids’ ability to use critical thinking and to write well if these tasks are farmed out to AI?


Or – is AI the wave of the future and are kids just early adopters, using it in the ways they will continue to use it for the rest of their lives? Will the lawyers of the future (or maybe the present…) use AI to write their briefs and their oral arguments?


After all, AI is already available to doctors at some hospitals to write their patient notes.


Do you want to talk to your kids to see what they think about using AI? Or do you want to establish rules in your house about how much AI your kids are allowed to use when doing schoolwork?


Think about it.

References

1. Pew Research Center, January 15, 2025. https://www.pewresearch.org/short-reads/2025/01/15/about-a-quarter-of-u…-

2. Campus Technology, August 28, 2024. https://campustechnology.com/articles/2024/08/28/survey-86-of-students-….