A.G. File No. 25-0025

December 12, 2025

Pursuant to Elections Code Section 9005, we have reviewed the proposed statutory initiative related to artificial intelligence (AI) and child safety (A.G. File No. 25-0025, Amendment #1).

Background

AI and Social Media Platforms. AI refers broadly to technologies that enable computer systems to perform tasks that typically require human intelligence, such as generating content, identifying patterns, or making predictions from data. Many online services, including social media platforms, incorporate AI tools into their products. For example, AI may be used to recommend content, personalize user experiences, or generate text, images, or audio in response to user input. The extent to which platforms use AI varies depending on the type of service offered, the scale of the business, and the data they collect.

Existing State Privacy Law Includes Certain Protections for Minors. State law prohibits businesses from selling or sharing the personal information of minors under age 16 without the minor’s explicit consent; for children under age 13, parental consent is required. Personal information includes information related to a minor’s online activity and images of the minor.

State Requirements for Operators of Companion Chatbots. Recently enacted state law, effective January 1, 2026, establishes certain requirements for operators of “companion chatbots,” particularly those accessible to minors. Specifically, this law:

  • Defines Companion Chatbot. A companion chatbot is defined as an AI system with a natural language interface that provides adaptive, human-like social interactions and can maintain a relationship over multiple interactions. The law excludes certain chatbots, such as those used solely for customer service, business operations, video game interactions limited to game-related topics, and stand-alone voice-activated virtual assistants that do not maintain relationships across interactions.
  • Mandates Operator Requirements. Operators of companion chatbots must follow specified safety and disclosure practices. These include clearly informing users that they are interacting with an AI system, implementing safeguards to prevent harmful or sexually explicit content, establishing protocols for responding when a user expresses self-harm or suicidal ideation, and reporting annually to the state on various matters.
  • Creates a Private Right of Action. Individuals injured by a violation of the law’s requirements may bring a civil action, that is, a lawsuit in which one party seeks a remedy for injury caused by another party. Available remedies include injunctive relief (such as an order stopping a particular activity), damages equal to the greater of actual damages or $1,000 per violation, and reasonable attorneys’ fees and costs. These cases can be filed individually or as class-action lawsuits.

Instructional Quality Commission (IQC) Advises the State Board of Education on Curriculum Frameworks and Instructional Materials. The IQC consists of 18 members, primarily appointed by the Governor, with expertise in academic content and curriculum. The IQC develops curriculum frameworks aligned with the state’s academic content standards and reviews and recommends instructional materials for kindergarten through eighth grade to the State Board of Education. Recent state law requires the IQC to consider incorporating AI literacy into the mathematics, science, and history-social science curriculum frameworks the next time they are revised.

Schools Required to Adopt a Policy Limiting Smartphone Use. Recent state law requires the governing boards of school districts, charter schools, and county offices of education to adopt, by July 1, 2026, a policy limiting or prohibiting the use of smartphones on school sites. The policy must include exceptions for emergencies or dangerous situations, medical necessity, individualized education programs, and specific educational purposes when permitted by school staff.

Proposal

Revises and Expands Definition of Companion Chatbot. Among other changes, the measure eliminates current-law exclusions for video game chatbots and stand-alone voice-activated virtual assistants from the companion chatbot definition.

Raises the Age Threshold for Consent to the Sale or Sharing of Personal Information. The measure prohibits businesses from selling or sharing the personal information of consumers under age 18 without the consumer’s explicit consent. For consumers under age 13, parental consent is required.

Prohibits Certain AI Products From Being Made Available to Children. The measure prohibits businesses and other entities from making “covered products”—AI products intended to be used by children, used to process children’s personal information, or applied directly to children—available to users under age 18, if the product is determined to pose an “unacceptable risk.” AI systems that collect or process a child’s biometric data (like fingerprints or facial structure), assess the emotional state of a child outside a limited medical context, or scrape images of children’s faces from the internet or surveillance footage without parental consent are deemed to pose an unacceptable risk. Similarly, companion chatbots capable of any of several specified activities, such as encouraging a child user to engage in harmful behaviors, are deemed to pose an unacceptable risk and are prohibited.

Establishes New State Regulatory Structure for Certain AI Products. The measure requires the Department of Justice (DOJ) to develop various regulations for state oversight of certain AI products. The measure refers to this regulatory structure as the “Child AI Safety Audit Mandate.” Some regulations focus on individual products, including criteria for determining which products are covered and classifying their estimated risk of negatively impacting children as low, moderate, high, or unacceptable, based on factors such as content exposure and potential emotional or behavioral influence. DOJ must also develop a process for assessing “high-risk” products both before and after deployment. Other regulations focus on independent monitoring, including certifying independent auditors to evaluate product safety, conduct audits, and submit findings to DOJ. DOJ must also develop and maintain a public database that provides information on regulated AI products, including summaries of submitted audit reports and any third-party complaints received. The measure allows DOJ to charge registration fees to cover the costs of these regulatory activities.

Allows State and Private Individuals to Seek Monetary Awards. The measure allows DOJ to seek monetary penalties of $25,000 for each violation of the above-noted prohibition and safety mandate requirements, as well as reasonable attorney fees. The measure also allows children (or their parents or guardians) who suffer “actual harm,” including serious injury or death, to seek monetary damages from AI and social media companies that fail to comply with these requirements. In addition, the measure lays out damages for cases brought against AI and social media companies that fail to take appropriate steps to avoid injury to children under age 18. In both types of cases, damages may include the greater of $5,000 per violation (capped at $1 million per child) or three times actual damages, such as medical costs and pain and suffering. The measure also allows courts to award reasonable attorney fees and other damages. These damages are cumulative with any other remedies available under current law (including the private right of action provided by the state’s recently enacted companion chatbot law).
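To make the damages arithmetic concrete, here is a minimal sketch in Python (the measure’s text contains no such code; the function name and inputs are hypothetical). It assumes the $1 million cap applies to the per-violation statutory amount for a single child and that treble damages are computed on total actual damages, consistent with the summary above.

    # Hypothetical sketch of the damages formula summarized above: the greater
    # of $5,000 per violation (capped at $1 million per child) or three times
    # actual damages. Names and inputs are illustrative, not from the measure.

    PER_VIOLATION = 5_000        # statutory damages per violation
    PER_CHILD_CAP = 1_000_000    # cap on per-violation damages for one child
    TREBLE_MULTIPLIER = 3        # alternative: three times actual damages

    def estimated_damages(violations: int, actual_damages: float) -> float:
        """Return the greater of capped per-violation or treble damages."""
        statutory = min(violations * PER_VIOLATION, PER_CHILD_CAP)
        treble = TREBLE_MULTIPLIER * actual_damages
        return max(statutory, treble)

    # 150 violations, $400,000 in actual damages: treble damages ($1.2 million)
    # exceed the capped statutory amount ($750,000).
    print(estimated_damages(150, 400_000))   # 1200000

    # 250 violations, $100,000 in actual damages: statutory damages hit the cap.
    print(estimated_damages(250, 100_000))   # 1000000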

Requires IQC to Review AI Literacy Content in the State’s Curriculum Frameworks. The measure requires the IQC to review AI literacy content within three of the state’s curriculum frameworks—math, science, and history-social science—and recommend revisions every three years. It also requires that any approved AI literacy content be made available for free on the State Board of Education’s website.

Requires Schools to Ban Use of Internet-Enabled Devices During Instructional Time. The measure requires school governing boards to develop and adopt, by July 1, 2027, a policy prohibiting students from using any internet-enabled device, including smartphones, during instructional time. The measure includes exceptions for the use of such devices in specified circumstances and requires the policy to include at least one method for parents and guardians to contact their students while internet-enabled devices are prohibited.

Creates a Children’s AI Safety Fund. The measure establishes the Children’s AI Safety Fund to support state oversight and implementation activities. All monetary penalties collected by DOJ and registration fees paid by AI developers would be deposited into the fund. Monies in the fund would be allocated in the following order: (1) offsetting costs incurred by the courts and DOJ in carrying out enforcement duties; (2) offsetting costs incurred by the California Department of Education to review and update AI literacy curriculum content and instructional materials; (3) supporting implementation of the smartphone-free classrooms policy; and (4) administering and funding grants to protect and prepare children to use AI safely.

Fiscal Effects

The fiscal effects associated with this measure, described below, are subject to uncertainty. These effects would depend on how the measure is legally interpreted, how DOJ implements its new regulatory responsibilities, the number and types of civil cases filed and their outcomes, and how affected entities respond to the measure.

Increased State Regulatory and Enforcement Costs. This measure would increase costs for DOJ and the state courts. DOJ would incur increased costs to regulate certain AI products and to maintain a public database that provides specific information on these products. DOJ costs could also increase to the extent that it files cases in state courts seeking penalties against AI developers or their products. Similarly, the state courts could experience increased workload and costs to process the cases filed by DOJ, as well as by children (or their parents or guardians) seeking monetary damages for harm they suffered. These increased costs are uncertain but could be in the tens of millions of dollars annually. Some or all of these costs would be offset by regulatory fees or civil penalty revenue won by DOJ.

Increased Education Costs for the State and Public Schools. The measure would increase state costs for the IQC to update certain curriculum frameworks every three years. These costs would vary by year but likely would average hundreds of thousands of dollars to a couple of million dollars annually. In addition, public schools could incur some additional costs to implement policies banning the use of internet-enabled devices during instructional time and to meet related requirements, such as providing a method for parents and guardians to contact their students while such devices are prohibited. These costs are uncertain and would depend on how schools choose to implement these requirements. Some or all of these costs would be offset by regulatory fees or civil penalty revenue won by DOJ.

Other Fiscal Effects. This measure could affect the taxes paid by technology companies. California-based companies could change their operations in response to the measure’s requirements. For example, some companies could limit access to certain AI features or restrict use by individuals under age 18. If these decisions change these companies’ profits or the number of people they employ in California, state and local tax revenues would be affected. Whether this would occur is uncertain.

Summary of Fiscal Effects. We estimate that the measure would have the following major fiscal effects:

  • Increased state regulatory and enforcement costs, potentially in the tens of millions of dollars annually, to regulate certain AI products related to children and to process court cases seeking the monetary awards the measure allows for violations of its provisions. Some or all of these costs would be offset by regulatory fees or monetary awards received by the state.
  • Increased costs for the state and public schools, likely averaging hundreds of thousands of dollars to a couple of million dollars annually, to update AI literacy content in curriculum frameworks and to implement policies banning internet-enabled devices during instructional time. Some or all of these costs would be offset by regulatory fees or monetary awards received by the state.