Harness the Potential of GenAI and Mitigate the Potential Legal Risk, Part 1: A Risk-Aware Approach
This article focuses on the risks associated with inputting data into GenAI tools and models. The outputs of GenAI systems also create risks; we will explore this in Part 2 of this series.
A Risk-Aware Approach to Data, Legal Governance, and Insurance
The transformative power of artificial intelligence (AI) is undeniable, and enterprises of all sizes and across industries are finding practical applications and innovations using AI tools.
The business use case for generative artificial intelligence (GenAI) and large language models (LLMs) like OpenAI’s ChatGPT is compelling. Businesses can use GenAI to automate, augment, and accelerate a variety of workflows. For example, many retail businesses use GenAI technologies to create personalized customer experiences or deploy chatbots to engage with their customers. GenAI is broadly used across industries for content creation and marketing, and many businesses rely on its enhanced behavior recognition capabilities for fraud detection and prevention.
But as the use and development of GenAI grows, so do the potential risks. Users of GenAI tools and applications should be concerned about the leakage of sensitive information, inadvertent violations of privacy rights, and the improper use of protected intellectual property in training models.
Adding to these concerns, several technology companies that create GenAI tools have faced allegations that they trained their foundational models on proprietary data obtained without authorization, in violation of copyright, contract, and privacy laws.
Appreciating these risks, many companies may be hesitant to adopt GenAI technologies. However, implementing a strong risk management program and obtaining the right insurance can strike a balance, minimizing legal risk while still allowing the use of GenAI technologies. In this article, we provide guidance and cover the following activities as part of a risk-aware approach to GenAI:
• Understand protections and limitations of the end-user license agreement indemnity clause.
• Use licensed data to fine-tune the model.
• Employ data protection principles when fine-tuning or prompting a model or tool.
• Manage the GenAI applications that can be used in the corporate network.
• Obtain an insurance solution that addresses the privacy risk.
1. Understand the Protections and Limitations of the End-User License Agreement Indemnity Clause
In response to the legal challenges surrounding GenAI tools, many large technology companies, including Google, Microsoft, Shutterstock, OpenAI, IBM, and Adobe, offer indemnification to their users for claims of copyright infringement arising from the use of some of their foundational models.
A shift of the risk or potential costs from the user to the developer through indemnification or a warranty can provide a measure of comfort in using a GenAI tool. Users should confirm that their end-user license agreement provides this protection and understand what conditions, if any, are attached to it.
- To begin, review the end-user agreement and confirm that the indemnity grant is available, as many startups or smaller GenAI providers may not offer this protection.
- Additionally, ensure the type of user account or license qualifies for the protection. Indemnity is typically unavailable for free accounts.
- Next, ensure that the application and/or any planned modifications to the tool or model do not violate the conditions for indemnity protection. The scope and conditions of the warranty or indemnity protection can vary significantly, and there are often restrictions on specific user prompts or uses of the model or tool.
Note that jailbreaking, modifying, or bypassing software restrictions implemented by the developer of the tool or model will likely void any warranty or indemnity protection that would be otherwise available.
Users are urged to review their application of the technology and proposed business use cases against the end-user license agreement. An attorney should be consulted to ensure that guardrails are in place and that indemnity protections or warranties remain available.
2. Use Licensed Data to Fine-Tune the Model
Training or fine-tuning GenAI models or tools is a powerful way to increase accuracy and customize models to fit a particular business use case or function.
Fine-tuning involves adding specialized knowledge to a model that has already learned general patterns. The pre-trained model is further refined with domain-specific datasets to calibrate it for a particular task or function.
When fine-tuning a model, a company should scrutinize the dataset being used and determine whether it contains any copyrighted or licensed information. Training a model on data that is copyrighted or otherwise subject to license terms or other usage restrictions creates substantial legal risk.
The most effective way to manage copyright and licensing risks when fine-tuning is to use data that the user has obtained a license to use specifically for training a GenAI model.
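As a minimal illustration of that practice, the sketch below assumes a hypothetical manifest file (datasets.json) that records the license status of each internal dataset, and submits a fine-tuning job, here via the OpenAI Python SDK, only for data explicitly licensed for model training. The manifest format, file names, and base model are illustrative assumptions, not a prescribed implementation.

```python
import json
from openai import OpenAI  # assumes the OpenAI Python SDK (v1.x) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical manifest mapping each dataset to its license metadata, e.g.:
# {"support_tickets.jsonl": {"licensed_for_training": true, "license": "Vendor agreement #123"}}
with open("datasets.json") as f:
    manifest = json.load(f)

for path, meta in manifest.items():
    # Gate: only datasets explicitly licensed for model training are submitted.
    if not meta.get("licensed_for_training"):
        print(f"Skipping {path}: no license covering model training ({meta.get('license', 'unknown')})")
        continue

    # Upload the licensed training file and start a fine-tuning job.
    training_file = client.files.create(file=open(path, "rb"), purpose="fine-tune")
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-mini-2024-07-18",  # example base model; confirm availability and terms
    )
    print(f"Started fine-tuning job {job.id} using {path}")
```

The point of the gate is governance, not engineering: pairing each dataset with documented license terms creates an auditable record that only properly licensed data reached the model.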
3. Employ Data Protection Principles When Fine-Tuning or Prompting
Through the normal course of business, many companies amass large quantities of valuable data. Companies can use empirical business intelligence and data to train or fine-tune AI models to solve a variety of business problems.
However, building and enforcing policies that safeguard trade secrets, confidential information, or personal information is essential to a GenAI risk management program. Accordingly, data should be validated and reviewed to confirm that it does not contain confidential, sensitive, or protected information before being used to fine-tune or as a prompt for any GenAI tool or model.
Some strategies include:
- Using data validation tools to verify that data conforms to predefined standards, including screening out protected, sensitive, and personal information (see the sketch after this list).
- Controlling the data that can be shared, by whom, and to which GenAI tools or models.
- Auditing the data that is being shared with GenAI tools or models based on privacy, sensitivity, regulation, and access.
- Building alerting and reporting mechanisms for instances when policies are breached.
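As a minimal sketch of the first strategy, the check below uses simple regular expressions to flag common identifiers (email addresses, U.S. Social Security numbers, payment-card-like numbers) before a prompt is sent to a GenAI tool. The patterns, function names, and blocking policy are illustrative assumptions; production programs typically rely on dedicated data loss prevention or PII-detection tooling.

```python
import re

# Illustrative patterns only; real deployments use dedicated PII/DLP tooling.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def send_to_genai(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block (or redact) and log the policy violation instead of sending.
        raise ValueError(f"Prompt blocked: possible {', '.join(findings)} detected")
    # ... call the approved GenAI API here ...

# Example: this prompt would be blocked because it contains an email address.
# send_to_genai("Summarize the complaint filed by jane.doe@example.com")
```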
4. Manage the GenAI-Enabled Applications Used on Your Corporate Network
Enterprising employees may be tempted to experiment with free GenAI tools to boost their productivity, or one department may deploy an AI-enabled tool to increase the efficiency of an internal process. However, the unsanctioned use of software, hardware, or systems without the knowledge or oversight of IT or the CISO, known as shadow IT, can be dangerous.
The danger of shadow IT is amplified with GenAI tools, as many free and paid GenAI tools state explicitly in their end-user license agreements that user inputs may be used to train the provider's underlying model. This means that if a user inputs sensitive, legally protected, or proprietary data, that information may be absorbed into a publicly available tool. Unbeknownst to corporate leadership, the use of a free AI tool could inadvertently create a vast data leak that undermines internal and external trust and violates confidentiality agreements and privacy laws.
Without appropriate oversight, shadow IT presents a substantial legal risk. Companies should work with their CISO or a cybersecurity consultant to establish a policy and mechanism to discover and monitor shadow IT within the organization.
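A discovery mechanism can be as simple as reviewing outbound traffic for known GenAI endpoints. The sketch below assumes a CSV proxy log with "user" and "destination_host" columns and a hypothetical watchlist of GenAI domains; actual log formats, domain lists, and tooling will vary by environment.

```python
import csv
from collections import defaultdict

# Hypothetical watchlist; maintain and expand this with your security team.
GENAI_DOMAINS = {"api.openai.com", "chat.openai.com", "gemini.google.com", "claude.ai"}

def find_shadow_genai_use(proxy_log_path: str) -> dict[str, set[str]]:
    """Map each user to the GenAI hosts they contacted, based on a CSV proxy log
    assumed to contain 'user' and 'destination_host' columns."""
    hits: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("destination_host") or "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[row.get("user") or "unknown"].add(host)
    return hits

# Example: report users accessing unapproved GenAI tools for follow-up.
# for user, hosts in find_shadow_genai_use("proxy_log.csv").items():
#     print(user, sorted(hosts))
```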
5. Obtain a Risk Transfer for Errors in the Execution of the Risk Management Plan Through a Well-Crafted Insurance Policy
While effective risk management and governance are paramount for the responsible use of GenAI tools, mistakes in the application of the risk management plan, a change in regulation, or simply an unforeseen event could still result in a loss or liability. For those situations, a well-placed cyber, technology, and media liability policy can provide a vehicle to transfer the risk.
Cyber insurance policies are designed to provide a risk transfer for, among other things, first-party losses and third-party liabilities arising out of privacy incidents or events. This typically means the failure to prevent unauthorized access, unauthorized use, unauthorized disclosure, theft, or misappropriation of protected information within a computer network.
Risks associated with inadvertent data leakage or unintentional data sharing that do not arise from a cyber incident or attack are often excluded from cyber insurance policies. However, as privacy risk has evolved, some carriers offer affirmative coverage for what is known as “wrongful collection,” or allegations of unintentional unlawful collection or use of personal data. These coverage grants are designed to provide risk transfer for violations of statutes like the Video Privacy Protection Act or the General Data Protection Regulation, but should be broad enough to encompass inadvertent data leakages that may violate any new AI legislation in the future. It is imperative that the privacy coverage grant include not only defense costs but also damages, regulatory fines, and penalties. Companies should work with their broker to find affirmative coverage suited to their AI-related data risk.
Next Steps for Companies Considering the Use of GenAI Tools
When considering the use of GenAI:
- Legal counsel should carefully evaluate end-user agreements to understand the breadth and application of any available warranties or indemnity agreements through the lens of the technology's use case.
- A business’s privacy committee should review proposals to train or prompt GenAI with company data, as doing so raises risks related to copyright infringement, confidentiality and related obligations, and privacy laws and regulations.
- Businesses should work with their insurance broker to obtain coverage for the liabilities that arise from the evolving nature of this risk.
Final Thoughts
As both the potential and the risks of AI come into focus, new legislation, obligations, and challenges are likely to emerge. Adopting an effective AI risk management program will enable businesses to take advantage of these new opportunities while managing the legal risk.