
A.G. File No. 2025-036

January 27, 2026




Pursuant to Elections Code Section 9005, we have reviewed the proposed statutory initiative related to artificial intelligence (AI) and child safety (A.G. File No. 25-0036, Amendment #1).

Background

AI and Consumer-Facing Applications. AI refers broadly to technologies that allow computer systems to perform tasks that typically require human intelligence, such as generating content, identifying patterns, or making predictions from data. Many companies use AI in products designed for consumers. These products range from tools built for specific purposes to more general systems that can hold conversations with users on a wide range of topics.

State Privacy Law Includes Certain Protections for Minors. State law limits how businesses may sell or share the personal information of minors. Specifically, businesses must obtain explicit consent before selling or sharing the personal information of minors under the age of 16. For children under the age of 13, parental consent is required. These protections apply to information related to a minor’s online activities, images, and other personal data.

State Law Establishes Age Assurance Requirements. State law, effective January 1, 2027, establishes age assurance requirements in certain online settings where a user’s age is relevant to legal compliance, such as restrictions on minors’ access to certain content or services. Under this law, certain operating system providers and application stores must provide age-range signals—information indicating a user’s general age category (for example, under 13, 13-17, or 18+)—that application developers can use to apply age-based requirements.

State Requirements for Operators of Companion Chatbots. State law, effective January 1, 2026, establishes requirements for operators of companion chatbots—AI systems that use natural language to provide ongoing, human-like social interactions. Under the law, operators must meet specific safety and disclosure requirements. These include informing users that they are interacting with an AI system, taking steps to reduce the risk of harmful or sexually explicit content, and maintaining protocols for responding when a user expresses suicidal ideation or thoughts of self-harm. Violations of these requirements may be enforced through civil actions, including lawsuits by harmed individuals seeking monetary damages and other relief.

Proposal

Establishes Child Safety Requirements for Certain AI Systems. The measure establishes new child safety requirements for providers of “covered AI systems,” generally defined as consumer-facing, conversational AI systems (including companion-style chatbots), while excluding business-only and certain narrow-purpose AI tools. For example, under the measure, providers must implement technology to estimate whether a user is a child (under the age of 18) or an adult and, when a user’s age cannot be estimated, to apply default protective safeguards. Providers must also annually review child safety risks of covered AI systems, including risks that the systems could increase the likelihood of self-harm or suicide, and take reasonable steps to reduce those risks. In addition, the measure requires providers to publish and update a child safety policy, maintain response protocols for situations involving self-harm or suicidal ideation, and implement safeguards to reduce the likelihood that AI systems generate harmful content for children. Providers must also offer parental controls that allow parents to manage certain aspects of a child’s use of the system, such as time limits and the use of a child’s data for training. Finally, providers must undergo annual independent audits at their own expense and submit audit reports to the Department of Justice (DOJ).

Prohibits Certain Providers From Engaging in Specified Advertising, Data Practices, and Deceptive Design. The measure prohibits providers of covered AI systems made available to children from engaging in certain practices. Specifically, it prohibits child-targeted advertising and limits the sale or sharing of the personal information of children under the age of 18, unless verifiable parental consent is obtained. The measure also prohibits the use of design practices that interfere with a child’s or a parent’s ability to find, understand, or use safety features, privacy controls, or parental controls.

Increases State Regulatory and Audit Oversight. The measure directs the DOJ to adopt regulations to implement and administer its requirements. These regulations would address independent audits of covered AI systems, including auditor standards, audit scope and methodology, and procedures for submitting and reviewing audit reports. The measure also requires the DOJ to establish a mechanism for third parties to report child safety incidents. In addition, the DOJ must publish an annual report summarizing overall audit findings and trends and create publicly accessible resources that provide information about covered AI systems’ child safety policies and parental controls.

Authorizes DOJ to Seek Civil Penalties. The measure allows the DOJ to enforce its requirements through civil actions. In these actions, the DOJ may seek civil penalties of up to $1,000 per violation for failure to implement required safeguards, and up to $10,000 per violation for willful violations or the submission of false or misleading information.

Fiscal Effects

Increased State Regulatory and Enforcement Costs. The measure would increase state costs, primarily for the DOJ. The DOJ would incur costs to develop and adopt regulations, review audit reports, create and maintain publicly accessible information, and carry out enforcement activities. These responsibilities would require additional legal, technical, and administrative resources. The measure could also increase workload and costs for the courts to the extent the DOJ brings civil actions authorized by the measure. The total level of these state costs would vary from year to year but would likely range from the millions to tens of millions of dollars annually.

Other Fiscal Effects. The measure could have additional fiscal effects on state and local governments. Enforcement actions could generate civil penalty revenue for the state, though the amount of any such revenue is uncertain and would depend on the number and severity of violations and the outcomes of enforcement actions. The measure could also affect state and local tax revenues by influencing how companies that offer covered AI systems operate in California. For example, some companies may change their product design, limit certain features for children, or increase spending on compliance to meet the measure’s requirements. To the extent these changes affect company profits, investment decisions, or employment levels in California, state and local tax revenues could be affected.

Summary of Major Fiscal Effects. We estimate that the measure would have the following major fiscal effects:

  • Increased state regulatory and enforcement costs, likely ranging from the millions to tens of millions of dollars annually, to implement and enforce the measure’s child safety requirements.