Why companies need an AI policy and what it should include
September 2025 | SPOTLIGHT | RISK MANAGEMENT
Financier Worldwide Magazine
As artificial intelligence (AI) tools increasingly become part of everyday business operations, companies face both immense opportunities and new risks. Investors and companies are pouring cash into the space, particularly generative AI (genAI). Some companies are investing tens or hundreds of millions of dollars or more into genAI and the training of large language models (LLMs). Other companies are building applications that leverage these LLMs or training their own models. Some companies are just using third-party AI tools.
Even if a company does not formally deploy AI solutions, it is likely that employees or contractors are already using AI-driven tools to create software, images, audio, videos, text and other content. Without proper guidelines, this use could result in liability, loss of proprietary rights, reputational harm or other adverse business consequences.
The rapid development and deployment of AI brings vast efficiencies and entirely new capabilities, but also introduces unique, evolving risks. From boardrooms to backend operations, AI’s influence stretches across nearly every facet of business, creating opportunities and ethical, legal and commercial questions that are anything but theoretical.
The unique aspects of genAI create a number of unresolved legal issues and new legal risks. It is important to understand these issues and how to mitigate the risks, even while the law remains uncertain. Dozens of lawsuits have been filed challenging various uses of genAI tools. A flurry of federal and state legislation is pending, and some new laws have been passed. Recent decisions have provided a preliminary indication of how some courts are ruling on these issues, but the decisions are not all consistent and some are being, or will be, appealed.
Whether companies are building their own AI technology, training their own AI models or leveraging third-party tools, there are significant legal issues and business risks that they need to consider as part of their corporate governance. Failure to do so leaves employees using tools without understanding the associated legal issues, which can expose the company to unnecessary risk.
As organisations invest millions into AI tools, regulators increase scrutiny and plaintiffs’ lawyers crank out the lawsuits, the stakes are clear: robust, actionable internal AI policies are not merely good practice but a foundational governance requirement and, for leadership, a fiduciary duty. For in-house counsel and board members, understanding both the ‘why’ and the ‘how’ of an AI policy means the difference between maximising value and exposing the organisation to existential risk.
For these and other reasons, companies must develop clear policies to guide the safe and legally compliant development and use of AI. AI policies are not ‘one size fits all’. Companies must tailor these policies to their own circumstances. And due to the rapidly changing legal and regulatory landscape, companies must regularly update their policies.
This article examines the indisputable reasons why companies must adopt comprehensive AI policies, explains some of the essential elements that such policies should address and highlights some of the risks that can be encountered if effective policies are not adopted.
Key risks that an AI policy can address
Below are some critical issues an AI policy should cover.
Copyright infringement. Training AI models on copyrighted materials can lead to infringement claims. The output of genAI models can also lead to copyright infringement issues.
Many lawsuits have been filed against AI tool developers for copying content to train AI models. Some recent decisions have found developers not liable for infringement where they structured their use so that it qualified as fair use.
Companies that are training AI models must ensure they have policies on collection and use of content used in the training to avoid infringement and maximise the likelihood of a fair use defence.
If the output of a genAI tool infringes, there is debate as to whether liability rests with the AI tool provider or the user who prompted the output. While most lawsuits to date have been filed against tool providers, plaintiffs’ lawyers will begin suing companies that publish or otherwise share infringing AI outputs. Companies can take steps now to reduce the likelihood of being sued, or of suffering a loss if they are.
Companies should only approve AI tools on a case-by-case basis after conducting technical and legal diligence on the tool. Technical issues include whether the tool provides safeguards (e.g., filters) to mitigate infringement. Additionally, some tool providers require users to indemnify the provider if the output infringes; others indemnify the users. Many companies only approve tools for which indemnities are provided. These are not issues that employees will typically consider, and they are one example of why companies must adopt policies.
Data privacy violations. Using certain data (including biometric data and personal information) for AI training without appropriate permissions poses significant privacy risks. Many companies hold vast amounts of data that they legally collected over the years and see the value in using it to train AI. However, if the privacy policy under which that data was collected did not disclose that it would be used to train AI, doing so may be illegal. The penalties for such misuse are severe. The Federal Trade Commission has increasingly required companies to delete not only improperly used data, but also any AI models or algorithms trained on that data, a penalty known as ‘algorithmic disgorgement’. This can wipe out years of investment if data was used without proper authorisation or disclosure.
Companies training AI must develop policies to ensure that any training data is legally collected and that, even if legally collected, the company has the right to use it for training purposes.
AI code generators and open source compliance. AI code generators have significantly increased the productivity of software developers, and in some cases eliminated the need for them. Because these tools are trained on open source code, companies must understand the open source legal issues that can arise. These include open source licence compliance, ensuring that software incorporating the outputs of these tools is not itself subject to open source licences, and assessing whether the removal of copyright management information violates applicable provisions of the Digital Millennium Copyright Act.
Companies using AI code generators must update their open source policies to account for these unique open source legal issues.
Loss of IP protection for AI-generated content. Current US copyright law generally does not protect content created by AI without meaningful human authorship. This means work generated largely by AI may not be eligible for copyright registration, reducing the ability to control or commercialise that content. US patent law likewise limits the patentability of AI-assisted inventions. Many public AI platforms claim rights to use employee input data (and some require users to grant an express licence for them to do so), meaning confidential business information shared with these tools could be made public or reused. Without appropriate safeguards, employees may unintentionally compromise trade secrets or other sensitive information.
Companies’ AI policies must address these and other IP issues to maximise the ability for companies to protect the IP that is protectable and avoid unnecessary loss of IP rights.
Use of AI recorders and notetakers. Many employees are using AI recorders and notetakers. These tools significantly increase productivity but implicate several legal issues, including the need for notice and consent to record, protection of confidential and privileged information, inaccuracies in transcripts and summaries, discoverability, document retention and more.
Companies’ AI policies must address AI tools that create legal risks, including AI recorders and notetakers.
Bias, fairness and non-discrimination. Without proper development procedures, AI tools may lead to biased or discriminatory outputs. Some legislation imposes obligations on both developers and deployers of AI to ensure that use of the tool does not lead to biased or discriminatory outputs. Regular auditing of AI tools for bias and disparate impact is necessary, especially if used in decisions impacting employees, consumers, patients, students and other individuals.
Companies’ AI policies must address processes to avoid bias and discrimination in the development and deployment of AI.
Vendor and tool vetting and approval. In addition to standard technical diligence done when acquiring any third-party technology, companies must conduct enhanced diligence on AI-specific aspects of acquired tools.
Companies’ AI policies must address processes to ensure they conduct appropriate AI-related diligence on tools they are looking to acquire, including technical and legal aspects.
Conclusion
The foregoing are just some of the issues that need to be considered in developing AI policies. It is often advisable for companies to receive legal training based on their planned development and use of AI so they can develop comprehensive policies unique to their own circumstances.
James Gatto is a partner at SheppardMullin. He can be contacted on +1 (202) 747 1945 or by email: jgatto@sheppardmullin.com.
© Financier Worldwide