State Spotlight — California’s SB 243
California establishes first-in-the-nation safeguards for AI companion chatbots.
The Bill at a Glance
Enacted on October 13, 2025, California Senate Bill 243 establishes the nation’s first targeted regulatory framework for so-called “companion chatbots,” AI systems designed to simulate human-like conversation and sustain emotionally resonant relationships over time. Authored by Senator Steve Padilla and signed into law by Governor Gavin Newsom, SB 243 creates a new compliance domain for AI developers and deployers that build or operate systems capable of social interaction, especially those accessible to minors.
“The stakes are too high to allow vulnerable users to continue to access this technology without proper guardrails in place to ensure transparency, safety and, above all, accountability.”
With most provisions taking effect January 1, 2026, and reporting requirements beginning July 1, 2027, the law imposes safety obligations, transparency requirements, and content restrictions to prevent psychological manipulation, promote mental health, and protect vulnerable populations from inappropriate or harmful AI-generated responses.
In doing so, the law breaks new ground by focusing not on frontier model scale or compute intensity, but on emotional influence and relational interaction. SB 243 makes California one of the first U.S. jurisdictions to set statutory duties for companionship-style AI chatbots, building transparency, safety, and accountability into how these systems interact with people.
Scope
SB 243 targets a specific and increasingly common subset of AI systems: companion chatbots. These systems are distinct from conventional customer-service bots, virtual assistants, and game characters, which are expressly excluded from the bill’s scope so long as they do not attempt to simulate an ongoing social connection.
Scope: SB 243’s jurisdiction is defined not by model size, but by intended use case and user impact.
Model Definition: Covers “companion chatbots,” defined as an artificial intelligence system with a natural language interface that produces adaptive, human-like responses and is capable of forming or sustaining a relationship across multiple sessions. This definition captures chatbots that simulate friendship, romantic interest, or therapeutic support, including AI “friends,” digital confidantes, and social wellness bots (a rough self-screening sketch follows this list).
Risk Focus: Focuses on mitigating emotional and psychological harm from sustained AI interaction, especially among minors and users in mental health crisis. The law addresses the growing concern that human-like AI systems, particularly those simulating intimacy or companionship, can mislead users, distort perceptions, encourage dependency, or even influence harmful behaviors.
Enforcement: Grants enforcement authority primarily through a private right of action, allowing individuals to sue for injunctive relief, actual damages, or statutory damages of $1,000 per violation. Additionally, operators are subject to regulatory reporting requirements beginning July 1, 2027, overseen by the California Office of Suicide Prevention.
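To make the scoping question concrete, here is a minimal sketch of how an operator might translate the definition above into an internal screening checklist. The SystemProfile and likely_in_scope names and the criteria wording are illustrative paraphrases, not statutory language, and the output is a prompt for closer legal review rather than a compliance determination.

```python
# Illustrative self-screening checklist loosely paraphrasing SB 243's
# "companion chatbot" definition; not legal advice and not statutory text.
from dataclasses import dataclass


@dataclass
class SystemProfile:
    natural_language_interface: bool    # conversational text or voice UI
    adaptive_humanlike_responses: bool  # responses adapt to the user over time
    sustains_relationship: bool         # remembers users across sessions
    customer_service_only: bool         # bounded task or support bot
    game_npc_only: bool                 # in-game character with no ongoing social role


def likely_in_scope(profile: SystemProfile) -> bool:
    """Rough screen: flags systems that warrant closer legal review."""
    excluded = profile.customer_service_only or profile.game_npc_only
    return (profile.natural_language_interface
            and profile.adaptive_humanlike_responses
            and profile.sustains_relationship
            and not excluded)


# Example: an AI "friend" app that remembers prior conversations
print(likely_in_scope(SystemProfile(True, True, True, False, False)))  # True
```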
Requirements
SB 243 imposes a range of operational, design, and reporting obligations on companion chatbot operators, with heightened scrutiny for systems accessed by minors. First, it requires a clear and conspicuous disclosure whenever a reasonable person interacting with the chatbot could be misled into believing they are speaking with a human; for known minors, the disclosure must be made up front and repeated periodically.
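As one illustration of how the disclosure duty might be operationalized, the sketch below attaches an AI notice at the start of a session and repeats it for known minors on a fixed cadence. The DisclosureState and maybe_attach_notice names, the notice text, and the three-hour default interval are assumptions made for the example; the statute does not prescribe an implementation.

```python
# Minimal disclosure-policy sketch; names and defaults are illustrative,
# not requirements drawn from the statutory text.
from dataclasses import dataclass
from datetime import datetime, timedelta

AI_NOTICE = ("Reminder: you are chatting with an AI companion, not a human. "
             "Responses are artificially generated.")


@dataclass
class DisclosureState:
    is_minor: bool
    last_notice_at: datetime | None = None


def maybe_attach_notice(state: DisclosureState,
                        now: datetime,
                        minor_interval: timedelta = timedelta(hours=3)) -> str | None:
    """Return the AI notice if one is due, else None.

    Always disclose at the start of a session; for known minors, repeat
    the notice on a fixed cadence (three hours here is only an example).
    """
    if state.last_notice_at is None:
        state.last_notice_at = now
        return AI_NOTICE
    if state.is_minor and now - state.last_notice_at >= minor_interval:
        state.last_notice_at = now
        return AI_NOTICE
    return None
```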
Operators must also disclose that the chatbot may not be suitable for some minors. The law requires operators to implement protocols that detect and prevent the generation of responses that include suicidal ideation or promote self-harm. If a user exhibits suicidal ideation, the chatbot must provide a referral to a crisis service, such as a suicide hotline or crisis text line, and the operator must publish its prevention protocol on a public-facing website.
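A minimal sketch of what such a protocol could look like in code appears below. The keyword screen and the screen_message and respond helpers are assumptions made to keep the example self-contained; a production system would use a validated, evidence-based classifier rather than a regex. The 988 Suicide & Crisis Lifeline and the Crisis Text Line are real crisis services of the kind the law contemplates referring users to.

```python
# Crisis-referral guardrail sketch; the detection method shown is a
# placeholder, not a technique mandated or endorsed by SB 243.
import re
from typing import Callable

CRISIS_RESOURCES = ("If you are in crisis, you can call or text 988 to reach the "
                    "Suicide & Crisis Lifeline, or text HOME to 741741 to reach "
                    "the Crisis Text Line.")

# Naive keyword screen used only to keep the example self-contained.
_CRISIS_PATTERNS = re.compile(
    r"\b(kill myself|end my life|suicide|self[- ]harm|hurt myself)\b",
    re.IGNORECASE,
)


def screen_message(text: str) -> bool:
    """Return True when the text suggests suicidal ideation or self-harm."""
    return bool(_CRISIS_PATTERNS.search(text))


def respond(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Wrap normal chatbot generation with a crisis-referral protocol."""
    if screen_message(user_message):
        # Pause the companion persona and surface crisis resources instead.
        return CRISIS_RESOURCES
    reply = generate_reply(user_message)
    # Block model output that itself includes or promotes self-harm content.
    if screen_message(reply):
        return CRISIS_RESOURCES
    return reply
```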
Under the law:
Mandatory Reporting: Beginning July 1, 2027, operators must report annually to the state’s Office of Suicide Prevention (OSP) on crisis-referral notifications issued, protocols for detecting and addressing suicidal ideation content, and their use of evidence-based measurement methods (a minimal record-keeping sketch follows this list).
Disclosure Requirements: Special obligations when the user is a minor: disclosure of the AI nature of the system, periodic reminders to take a break, and restrictions around sexually explicit material and self‑harm content.
Notice: If a reasonable person interacting with the system would be misled into thinking they’re talking to a human, the operator must provide a clear and conspicuous notice that the chatbot is artificially generated and not human.
Scope and Enforcement: A private civil right of action for any person who suffers “injury in fact” due to a violation, with remedies including injunctive relief, damages (actual or $1,000 per violation), and attorney’s fees and costs.
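The record-keeping sketch referenced above illustrates one way an operator might accumulate de-identified counts ahead of the annual OSP submission. The event fields, CSV layout, and append_event and annual_summary helpers are assumptions; the statute specifies what must be reported, not how the underlying data is stored.

```python
# De-identified event log for crisis-referral notifications; field names
# and file format are assumptions, not prescribed by SB 243.
import csv
from dataclasses import dataclass
from datetime import date


@dataclass
class CrisisReferralEvent:
    occurred_on: date       # when the crisis referral was surfaced
    referral_shown: bool    # whether crisis resources were displayed
    detection_method: str   # e.g. "keyword screen", "classifier v2"


def append_event(path: str, event: CrisisReferralEvent) -> None:
    """Append one event row; no user content or identity is stored."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([event.occurred_on.isoformat(),
                                event.referral_shown,
                                event.detection_method])


def annual_summary(path: str, year: int) -> dict:
    """Count crisis-referral notifications logged in a given year."""
    total = shown = 0
    with open(path, newline="") as f:
        for occurred_on, referral_shown, _method in csv.reader(f):
            if occurred_on.startswith(str(year)):
                total += 1
                shown += referral_shown == "True"
    return {"year": year, "events": total, "referrals_shown": shown}
```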
By anchoring the definition in interaction type and emotional function, rather than technical architecture or model size, SB 243 opens a new front in use-case-specific AI regulation.
Why It Matters
SB 243 signals a shift in AI governance, away from technical scale and toward human consequence. By focusing on the emotional and psychological impacts of chatbot interactions, the law recognizes that AI systems do not need massive compute budgets or national security implications to pose real-world harm.
For developers, SB 243 introduces a new compliance track distinct from traditional AI risk categories. Safety protocols, disclosure design, and mental health triggers are now core product concerns for systems that engage in social or emotional interaction.
For policymakers, the law illustrates how states are beginning to regulate AI based on context and use case, not just technical characteristics.
Legislogiq’s Insight
“As states take the lead in regulating emerging use cases like companion chatbots, this law sets an early benchmark for transparency, safety, and user protection.”
SB 243 expands California’s AI policy leadership by entering a domain largely untouched by federal or international frameworks: emotionally intelligent AI designed for sustained, social interaction. Its provisions foreshadow a new wave of use-case-specific AI legislation, one that blends consumer protection, mental health regulation, and youth safety into the AI governance toolkit.
Legislogiq expects that this law will catalyze new policy efforts in other states where child safety and tech accountability are politically important. Additionally, the bill’s emphasis on disclosures, interaction boundaries, and crisis protocols will influence design standards across wellness and companionship AI sectors, even outside California.
Notably, SB 243 contrasts sharply with the Trump Administration’s federal approach, which prioritizes deregulation and innovation speed. California is once again emphasizing safety-by-design, this time through the lens of psychological protection and human interaction boundaries.
What You Can Do
SB 243 establishes new obligations for AI systems that simulate human companionship. If your organization develops, deploys, or operates companion chatbots:
Evaluate your system to determine whether it qualifies under the bill’s definition of a “companion chatbot.”
Implement disclosures that are clear, frequent, and tailored to user expectations, especially for minors.
Develop and publish protocols for identifying and addressing self-harm and suicidal ideation in user interactions.
Design for minor protections, including default break reminders and restrictions on explicit or manipulative content.
Track and prepare reporting data ahead of the July 2027 deadline for annual submissions to the Office of Suicide Prevention.
At Legislogiq, we help AI innovators turn compliance into competitive advantage. From mapping your AI footprint to crafting advocacy strategies around laws like California SB 243, our team helps you stay ahead of the regulatory curve while shaping how AI transparency evolves nationwide. Contact us and someone from our team will connect with you.