September 30, 2025

Let's Talk About AI

by Tyler Fisher

September marks Suicide Prevention Month, a fitting time to confront the urgent mental health crisis facing teens across the country. Rates of depression, anxiety, and suicide among young people have climbed to record levels. While the causes are complex, it is clear that technology plays a powerful role. The same devices that connect teens to friends and school can also amplify isolation, fuel harmful behaviors, and, if left unchecked, expose them to devastating risks.

As the co-founder of a leading digital mental health company supporting young people across the United States, I see firsthand both the possibilities and the dangers of technology. I firmly believe that accountability is non-negotiable: companies must take responsibility for how their tools are designed and ensure they never put vulnerable people at risk. Any chatbot or AI tool that is allowed to provide mental health guidance, especially to minors, must handle a potential crisis the same way a highly trained human professional would. A system that encourages suicide or reinforces harmful behavior, under any circumstance, should be held to the same standard as a human mental health provider who does the same. AI can and will play many important roles in our society, but deeply human struggles require compassionate human professionals, not algorithmic outputs that, in their current state, plainly lack human judgment.

Current accountability efforts focus on damage control, but retrospective solutions are not sufficient. We need proactive protection built into the design from day one. There may come a day when we can feel comfortable trusting AI with this role, but even today's state-of-the-art models fall far short of that benchmark. We must offer solutions that meet teens and young adults where they are and incorporate real safety guardrails. Suicide Prevention Month is a reminder that we can do more than raise alarms: we can create systems of care that give every person, especially teens, a real, human lifeline.

Rising Rates and Hidden Pain

The data are clear: suicide rates among adolescents and pre-teens are rising at an alarming pace, underscoring a mental health crisis that has been building for years. Many teens experience persistent feelings of hopelessness, sadness, and anxiety that affect both their academic outcomes and their personal relationships.

When Teens Turn to AI

The data suggest that suffering among teens is more widespread than many realize. Many struggle silently, masking their pain from friends, family, guidance counselors, and teachers. Society at large has made it all too convenient to simply turn to social media or an AI tool, where it is easy to find all sorts of information, guidance, and advice, both good and bad.

A recent report by the Center for Countering Digital Hate (CCDH) uncovered serious risks posed by AI chatbots to children and teens. Researchers found that within minutes of a conversation with a supposed 13-year-old, ChatGPT could be prompted to generate harmful content, including guidance on self-harm, suicide methods, eating disorders, and substance use. Dangerous advice proved simple to elicit. In many cases, warnings were dismissed or even replaced with step-by-step suggestions for carrying out harmful behaviors “safely.” These aren’t isolated mistakes; they are recurring, predictable, well-documented failures.

The findings are especially concerning given how pervasively teens are engaging with these tools. Roughly 72% of U.S. adolescents have tried AI companions; more than half use them on a regular basis. ChatGPT is the most commonly used platform among them.

Another recently published study in Psychiatric Services tested three flagship U.S. large language models (ChatGPT, Claude, and Gemini) and found that all three gave concerning replies to what researchers described as “intermediate-risk” questions about suicide.

There have been several tragic, real-life instances of this occurring. In one case, the parents of Adam Raine sued OpenAI, alleging that ChatGPT helped their son “explore suicide methods.” In a similar case, Florida mother Megan Garcia sued Character.AI, claiming one of its AI companions persuaded her 14-year-old son to take his own life.

Garcia and several other concerned parents testified before a Senate panel during a hearing earlier this month. “The goal was never safety, it was to win a race for profit,” she asserted. “The sacrifice in that race for profit has been and will continue to be our children.” As another parent declared during the hearing, “Our children are not experiments, they’re not data points or profit centers…. If me being here today helps save one life, it is worth it to me. This is a public health crisis that I see. This is a mental health war, and I really feel like we are losing.”

Recent Policy & Tech Responses

In response to these lawsuits and heartbreaking stories, OpenAI announced that it will be rolling out new safety guardrails for teens and users in emotional crisis. Under the new system, parents can link their accounts with their teens’ accounts, partially control how the AI responds, and disable features like memory or chat history. Conversations that show signs of acute distress will be routed to more advanced models designed to handle sensitive contexts, and parents will receive alerts if their teen appears at risk. This safety initiative underscores the responsibility technology companies have when their platforms are part of vulnerable moments in people’s lives. What matters most is safe, reliable care—and that requires stronger protections built into every system of support. Though these are important steps in the right direction, they will not wholly solve the problem.
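
For readers curious what “safety built into the design” can look like in practice, here is a minimal, purely illustrative sketch of a crisis-routing guardrail, written in Python. The keyword list, function names, and thresholds are my own assumptions for the sake of illustration; they do not represent OpenAI's actual system, and real deployments rely on far more sophisticated risk classifiers than simple keyword matching.

# Hypothetical sketch only: names, keywords, and logic are illustrative
# assumptions, not any company's real implementation.
from dataclasses import dataclass
from typing import Optional

# A real system would use trained risk classifiers; a keyword list stands in here.
CRISIS_SIGNALS = {"suicide", "kill myself", "self-harm", "end it all"}

@dataclass
class GuardrailDecision:
    route_to_safety_model: bool             # hand off to a model tuned for sensitive contexts
    notify_linked_parent: bool              # alert a linked parent account for a minor
    crisis_resources: Optional[str] = None  # surface a human lifeline such as 988

def assess_message(message: str, user_is_minor: bool, parent_linked: bool) -> GuardrailDecision:
    """Check one message for acute-distress signals and decide how to route it."""
    acute_distress = any(signal in message.lower() for signal in CRISIS_SIGNALS)
    if not acute_distress:
        return GuardrailDecision(route_to_safety_model=False, notify_linked_parent=False)
    return GuardrailDecision(
        route_to_safety_model=True,
        notify_linked_parent=user_is_minor and parent_linked,
        crisis_resources="If you are in crisis, call or text 988.",
    )

if __name__ == "__main__":
    print(assess_message("I keep thinking about suicide", user_is_minor=True, parent_linked=True))

The specifics matter far less than the design principle: risk detection, escalation to a safer handling path, and a human notification loop should be first-class parts of the product from day one, not features bolted on after a tragedy.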

Simultaneously, schools across the country are increasingly limiting phone use during instructional hours or at specific times of the day, responding to concerns about distraction and the mental health impacts of constant connectivity. My home state of New York has even implemented a full bell-to-bell no-phones policy for all students. While research on the direct effects of these policies is still evolving, the trend reflects growing awareness that excessive device use, involving AI or otherwise, can contribute to stress, anxiety, and social comparison among teens.

What We Can Do 

Suicide Prevention Month demands action, especially on behalf of the "silent sufferers"—young people who quietly struggle with depression, anxiety, loneliness, or other issues, yet never ask for help.

So what can we do? Here are some paths forward:

  • Build safe, trusted spaces for conversation. We need to meet young people where they are with judgment-free, confidential, accessible support. This includes designing digital resources that feel as safe and approachable as in-person care and that work in tandem with in-person systems.
  • Encourage help-seeking early so small issues stay small. Normalizing wellness and prioritizing mental health care enables teens to manage issues before they reach a crisis level.
  • Set tech boundaries thoughtfully. Facilitate healthy use of social media through measures such as encouraging digital downtime and recognizing when tech becomes a source of stress rather than relief.
  • Hold tech platforms accountable and build with safety in mind. Encourage guardrails similar to those OpenAI is introducing, and find ways to add oversight, transparency, and external review to teens’ use of AI and social media platforms.
  • Reach out to the quiet ones. The silent sufferers are often those who seem okay in school and don’t post about their negative feelings, all while their grades slip, they disengage in classes, or they drift away from their friends and family. Recognizing that someone may need help and offering a private check-in or connecting them with mental health resources can make a huge difference, and it never hurts to ask if someone is doing okay.

Conclusion

As we mark Suicide Prevention Month this September, we recognize some sobering truths: teen suicide rates are rising; tremendous numbers of young people are suffering, many of them quietly; and technologies play pivotal roles in their (and our) lives for better and for worse. 

By building safer technology, fostering compassionate communities, listening closely, and intervening early, we can pull more young people back from the edge before things become irreversible. We can’t solve every issue, but we can work to create a world where support is always within reach, and where no young person feels alone in their struggles.

If you’re someone who is hurting, or you know someone who is, say something to someone. If you or someone you know needs help, reach out to a trusted adult, or call or text 988.

The views and opinions expressed here are solely those of the author and should not be attributed to Counslr, Inc., its partners, its employees, or any other mental health professionals Counslr employs. You should review this information and any questions regarding your specific circumstances with a medical professional. The content provided here is for informational and educational purposes only and should not be construed as counseling, therapy, or professional medical advice.
