
Use of AI Chatbots for Counselling and Mental Health Support by Teenagers and Young Adults

Parliamentary debate on Oral Answers to Questions in the Singapore Parliament on 27 February 2026.


Debate Details

  • Date: 27 February 2026
  • Parliament: 15
  • Session: 1
  • Sitting: 21
  • Topic: Oral Answers to Questions
  • Subject matter: Use of AI chatbots for counselling and mental health support by teenagers and young adults
  • Keywords: data, given, train, chatbots, counselling, mental, health, support

What Was This Debate About?

The parliamentary exchange on 27 February 2026 focused on the growing use of AI chatbots in the mental health and counselling space, particularly for teenagers and young adults. The questioner raised concerns about how Large Language Models (LLMs) are trained and how training data may be handled when AI systems are deployed to provide supportive or counselling-like interactions. The debate therefore sits at the intersection of (i) AI governance, (ii) data privacy and security, and (iii) the protection of vulnerable users who may be more susceptible to harm from inaccurate or inappropriate guidance.

In legislative terms, the discussion occurred during "Oral Answers to Questions," a procedural setting in which Members seek clarifications from Ministers and agencies. While such exchanges are not typically the site of new statutory text, they can be highly relevant to legislative intent and regulatory interpretation: they reveal how the Government understands existing legal duties (for example, those concerning personal data protection and consent) and what safeguards it considers necessary as technology evolves.

The questioner’s framing was twofold. First, they asked how the Ministry assesses potential data privacy risks where LLMs may be used to train on data and to help users “understand the data better.” Second, they asked whether safeguards or informed consent standards are being considered to protect vulnerable users. The record also indicates a further concern that AI systems may “overly affirm,” suggesting worries about the reliability, calibration, and potential psychological impact of chatbot responses in mental health contexts.

What Were the Key Points Raised?

1) Data privacy risks in training and deployment. A central issue was the handling of data used to train LLMs and the privacy implications for users who interact with chatbots. The questioner specifically tied the risk assessment to the possibility that user data "may be used to train" the models. This implies that the debate was not limited to whether personal data is collected during a conversation, but extended to how training pipelines may ingest, transform, or retain information. For legal researchers, this is significant because it highlights that privacy risk can arise both at the "front end" (user inputs) and at the "back end" (training, fine-tuning, and model improvement).

2) Safeguards and informed consent for vulnerable users. The questioner asked whether the Government is considering “safeguards or informed consent standards” to protect vulnerable users. This is a normative and legal question: in a counselling context, users may not fully understand how AI systems operate, what data is processed, or what limitations apply. Teenagers and young adults may also face heightened vulnerability due to developmental factors and the nature of mental health support. The debate therefore raised the question whether existing consent frameworks are adequate for AI-mediated support, or whether additional standards—such as clearer disclosures, age-appropriate consent mechanisms, or enhanced user controls—should be developed.

3) Reliability and psychological safety (“overly affirming” responses). The record indicates concern that AI systems may “overly affirm.” While the excerpt is incomplete, the legal relevance is clear: in mental health support, an AI chatbot’s tone and content can influence a user’s beliefs, coping strategies, and willingness to seek professional help. Overly affirming responses could potentially validate harmful thoughts, reduce urgency to obtain human assistance, or otherwise distort the user’s perception of their situation. This raises questions about duty of care-like considerations in the design and deployment of AI systems, including whether there should be guardrails, escalation protocols, and content moderation standards.

4) The broader governance framework for AI in sensitive domains. Although the debate was framed around counselling and mental health support, its underlying theme is governance of AI in sensitive domains. The questioner’s emphasis on training data and consent suggests that the Government’s approach must address the full lifecycle of AI systems: data sourcing, training and testing, deployment, monitoring, and user communications. For lawyers, this matters because it signals how regulators may interpret privacy and consumer protection principles when AI is used not merely for information retrieval, but for interactions that resemble counselling.

What Was the Government's Position?

The provided debate record contains only the questioner’s statements and does not include the Ministerial response. Accordingly, this article cannot accurately summarise the Government’s specific assurances, proposed safeguards, or references to statutory instruments. However, the questions themselves indicate the policy direction the Government would likely need to address in its answer: risk assessment for training-related privacy impacts, the adequacy of informed consent standards for young users, and the need for safeguards against misleading or psychologically unsafe chatbot behaviour.

In an oral answers context, the Government’s position would typically be expected to clarify (i) what regulatory frameworks apply to AI chatbots and personal data processing, (ii) how privacy risk is assessed and mitigated, and (iii) what operational safeguards exist or are planned for vulnerable users. For legal research, the absence of the Government’s response in the excerpt means that researchers should consult the full Hansard record for the Minister’s reply to identify the precise legal and regulatory commitments made.

Why Does This Debate Matter for Legal Research?

1) Legislative intent through regulatory clarification. Even though oral answers do not amend statutes, they can be used as evidence of legislative intent and administrative interpretation. When Members ask about “safeguards or informed consent standards,” they are effectively probing whether existing legal duties are sufficient or whether the Government intends to develop additional standards. If the Minister’s reply references specific legal provisions, guidance, or regulatory principles, that can inform how courts and practitioners interpret obligations under privacy and data protection regimes, especially in AI contexts.

2) AI-specific privacy and consent questions are becoming legally salient. The debate’s focus on LLM training data underscores a key legal development: privacy risk is not confined to direct collection. Lawyers advising organisations deploying AI chatbots will need to consider how training data is sourced, whether it includes personal data, how it is anonymised or de-identified, and what safeguards exist to prevent re-identification or misuse. Similarly, consent and disclosure questions—particularly for teenagers and young adults—may affect how organisations design user interfaces, privacy notices, and age-appropriate consent workflows.

3) Duty of care and safety-by-design considerations in mental health support. The “overly affirming” concern points to a broader legal and compliance theme: when AI is used in mental health support, safety and reliability concerns may intersect with regulatory expectations and contractual or tort-like risk management. While the debate excerpt does not establish a legal duty, it signals that policymakers view chatbot behaviour as potentially consequential. For practitioners, this can translate into compliance steps such as human escalation pathways, limitations on medical or counselling claims, monitoring for harmful outputs, and documentation of risk assessments.

4) Practical implications for compliance and documentation. For legal research and advisory work, the debate highlights the types of questions regulators may ask: how data is used to train models, what privacy risk assessments are conducted, what informed consent or disclosures are provided, and what safeguards prevent harmful or misleading responses. Organisations can use these themes to structure internal governance documentation—privacy impact assessments, model cards, user consent flows, and incident response plans—so that they align with the concerns raised in Parliament.

Source Documents

This article summarises parliamentary proceedings for legal research and educational purposes. It does not constitute an official record.

Written by Sushant Shukla
