Bot or not: navigating California’s Bot Disclosure Law
August 2025 | FEATURE | RISK MANAGEMENT
Financier Worldwide Magazine
The customer-service provider relationship, once a straightforward one, is today much more complex, requiring consistent engagement and trust to foster innovation and growth.
Driven largely by rising digital engagement and a greater emphasis on data privacy and security, companies are increasingly utilising customer relationship management (CRM) systems to track interactions across platforms, while leveraging artificial intelligence (AI) to personalise customer experiences.
Moreover, these capabilities are being augmented by recent advances in generative AI, particularly the use of bots – automated software applications capable of engaging in complex interactions via text and voice – which are now commonplace and considered an integral part of a CRM system.
Testifying to this ubiquity is analysis by Master of Code Global, which notes that while there is no precise single number for all companies using bots, the majority of studies indicate that a significant portion of businesses across various sectors are now employing bots such as chatbots.
According to Master of Code Global, approximately 60 percent of business to business and 42 percent of business to consumer companies use chatbot software. Moreover, fuelled by the increasing demand for 24×7 customer services and operational cost reduction, the chatbot market is set to expand at a remarkable 23.3 percent annually, reaching $15.5bn by 2028.
And yet, while bots become more commonplace, legal and ethical issues related to their use remain unresolved – chief among them whether it is necessary or advisable for service providers to disclose when customers are interacting with a bot.
The California Bot Disclosure Law
In 2019, before the advent of advanced generative AI, the state of California enacted a bot disclosure law – California Bot Disclosure Law SB 1001 – that requires bot deployers to disclose that users are interacting with bots in specific contexts.
As the law reads: “[It is] unlawful for any person to use a bot to communicate or interact with another person in California online, with the intent to mislead the person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election”.
“California was aware early of technological developments,” affirms Patrick Henz, special adviser for compliance Latin America at Mitsubishi Heavy Industries America. “To protect the human, the law and its content is straightforward and to the point, requiring the users of an AI system, such as a chatbot, to know when they are communicating with a machine instead of a human.
“As such, understanding depends on the user’s experience,” he continues. “AI is always required to disclose this information at the beginning of an interaction. As defined by law, this is especially relevant if a company or other organisation aims to buy or sell from, or to, the user, or otherwise wants to influence, including voting at an election.”
Ensuring compliance
Given the myriad legal and ethical issues that can arise when bots are utilised as part of a transaction, companies are well-advised to take appropriate steps to ensure compliance with the California law and avoid associated penalties.
In the first instance, the California Bot Disclosure Law states that a bot must make a disclosure that is “clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot”.
The law itself does not define “clear” or “conspicuous” but their interpretation is consistent with Federal Trade Commission (FTC) guidance. According to the agency, companies should consider the following factors to determine whether a disclosure is clear and conspicuous: (i) how close the disclosure is placed to where the claim is made; (ii) how prominent the disclosure is; (iii) how unavoidable the disclosure is; (iv) whether other parts of the ad pull attention away from the disclosure; (v) whether the disclosure needs to be repeated more than once; and (vi) whether the language used is understandable to the intended audience.
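The disclosure-first approach suggested by these factors can be sketched in code. The following is a minimal, illustrative sketch only – all class and function names are hypothetical, and the `_generate` method is a placeholder for whatever model or service actually produces replies – showing one way a chat handler might guarantee that a plain-language bot disclosure is emitted before any substantive response:

```python
# Hypothetical sketch of disclosure-first chat handling.
# The disclosure is emitted before the first substantive reply,
# reflecting the FTC factors above: prominent, unavoidable and
# phrased in language the intended audience can understand.

BOT_DISCLOSURE = (
    "Hi! You are chatting with an automated assistant (a bot), "
    "not a human agent."
)

class ChatSession:
    def __init__(self):
        self.disclosed = False
        self.transcript = []

    def reply(self, user_message: str) -> str:
        # Make the disclosure unavoidable: it precedes the first
        # answer rather than sitting in a footer or settings page.
        if not self.disclosed:
            self.transcript.append(BOT_DISCLOSURE)
            self.disclosed = True
        answer = self._generate(user_message)
        self.transcript.append(answer)
        return answer

    def _generate(self, user_message: str) -> str:
        # Placeholder for the real bot backend.
        return f"(bot answer to: {user_message})"

session = ChatSession()
session.reply("What are your opening hours?")
print(session.transcript[0])  # the disclosure always comes first
```

Centralising the disclosure in the session object, rather than in any individual prompt, is one way to ensure it cannot be skipped regardless of how a conversation begins.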
Without proper disclosure, the California Bot Disclosure Law expressly provides for enforcement by the state attorney general, who has broad enforcement authority to levy fines of up to $2500 per violation – which is not as negligible as it sounds, given that bots can interact with hundreds of users in the blink of an eye – as well as equitable remedies.
Thoughtful deployment
As AI technologies continue to advance, with chatbots such as ChatGPT at the vanguard, disclosure requirements will also need to evolve, particularly in an online world where bots are now virtually indistinguishable from humans.
“Humans have a tendency to humanise their environment,” suggests Mr Henz. “The more sophisticated the chatbot is, the greater this tendency becomes. In order to comply with the law, it may therefore be insufficient to have bot disclosure even at the beginning of an online conversation.”
As the pervasiveness of bots in customer-service provider interactions expands, disclosure requirements are certain to evolve, driven by legislative developments and oversight by the FTC. In turn, companies need to deploy their bots carefully, both to ensure compliance and to avoid liability for inaccurate statements.
© Financier Worldwide
BY
Fraser Tennant