
Artificial Intelligence: Practical Considerations for Boards and Management Teams

Artificial intelligence poses great opportunities for companies—as well as significant risks and challenges. We'll cover what directors and officers need to know as their companies begin to use AI.
Artificial intelligence is coming for all of us—and Boards of Directors will want to lean forward into the issues. As companies consider implementing artificial intelligence initiatives, staying on top of the evolving risks, litigation trends, and the changing legal landscape will be a priority. In this week’s blog, my colleague Lenin Lopez sheds light on these issues and offers a few strategic considerations to help in navigating what is proving to be a legal and regulatory minefield.

– Priya Huskins

Artificial intelligence (AI) poses great opportunities for companies—as well as significant risks and challenges. While leveraging AI in business applications isn't necessarily a novel idea, the introduction of generative AI tools like ChatGPT is new and has caught the attention of Wall Street. According to a 2021 Accenture analysis of the world's 2,000 largest companies by market capitalization, those whose executives discussed AI on their earnings calls were 40% more likely to see their share prices increase, up from 23% in 2018.

Fast forward to the latest earnings cycle, and artificial intelligence continues to be a topic of discussion on earnings calls. Reuters reported that companies announcing earnings in recent weeks brought up AI even more often than in the previous quarter. According to the article, executives and analysts mentioned AI 58 times on earnings calls, up from 15 mentions on calls in April.

The introduction of more advanced AI and increased adoption by companies has created an inflection point for directors and management teams, especially in terms of how they may want to consider evaluating and assessing AI-related initiatives and risk.

This article will:
  • Briefly explain the difference between traditional AI and generative AI
  • Describe notable AI-related risks, as well as associated litigation and enforcement trends
  • Discuss some of the legal and regulatory frameworks (or lack thereof) governing AI
  • Share practical considerations for boards and management teams


Traditional AI versus Generative AI

As noted above, employing AI as a business tool is nothing new. Think support chatbots, supply chain optimization, quality control, or even systems that recommend clothing styles to customers. These are examples of traditional AI, which relies on predefined rules to make decisions and/or perform tasks. As a result, traditional AI is typically limited in scope to the task it was developed to address. For instance, the AI developed to power my favorite chess application wouldn’t be able to assist in product development or drug discovery.

This is where generative AI comes in. Generative AI isn't limited by predefined rules to make decisions. It can harness large pools of data to propose new approaches to problems and produce unique content. As an example, in the pharmaceutical space, generative AI could conceivably analyze clinical trial data from failed programs to develop new trial designs. Generative AI even has the potential to simulate clinical trials. Generative AI might also be able to assist in generating legal and compliance documents, but as one lawyer learned, using generative AI can be fraught with risk. For the remainder of this article, unless otherwise noted, AI will be used to describe both traditional AI and generative AI. See this article from the US Government Accountability Office for more information regarding generative AI.

AI-Related Risk Factors

The potential risks associated with the use of AI include, among others, the lack of transparency in the underlying assumptions on which AI models are built, the introduction of bias and discrimination into decision-making, privacy violations, social manipulation, and even, potentially, the next financial crisis. For insights into how some public companies are viewing AI-related risks in the context of their business, one need only look at annual reports filed with the US Securities and Exchange Commission.

Weil, Gotshal & Manges recently reported on the increase in discussions and disclosures at public companies about AI-related risks. According to the report, more than 100 companies in the S&P 500 included risk factor disclosures related to AI in their quarterly and/or annual reports. Some of the disclosed risks relate to increased cybersecurity risks, flawed AI algorithms, reputational harm, competitive harm, legal liability, and challenges associated with compliance with evolving AI laws and regulations.

Some companies have already publicly acknowledged the risks that AI poses to their business, financial results, reputation, and credibility.

What follows are a few instructive examples of AI-related risk factor disclosures from companies that use AI in their business, are contemplating its use, or are simply addressing AI-related risk.

  • Microsoft: Addressing AI risk in the context of cybersecurity vulnerabilities

    “Increasing use of generative AI models in our internal systems may create new attack methods for adversaries. Our business policies and internal security controls may not keep pace with these changes as new threats emerge, or emerging cybersecurity regulations in jurisdictions worldwide.”

  • GE HealthCare: Addressing AI risk associated with certain of its product offerings

    “We are building AI into many of our digital offerings, which presents risks and challenges that could affect its acceptance, including flawed AI algorithms, insufficient or biased datasets, unauthorized access to personal data, lack of acceptance from our customers, or failure to deliver positive outcomes. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, as well as their adoption, subjecting us to competitive harm, legal liability, regulatory actions, and reputational harm. In addition, some AI scenarios present ethical, privacy, or other social issues, risking reputational harm. We have safeguards designed to promote the ethical implementation of AI but these safeguards may not be sufficient to protect us against negative outcomes.”

  • Yext: Addressing AI risk associated with certain of its product offerings

    “We have incorporated a number of generative AI features into our products. This technology, which is a new and emerging technology that is in its early stages of commercial use, presents a number of risks inherent in its use. AI algorithms are based on machine learning and predictive analytics, which can create unintended biases and discriminatory outcomes. We have implemented measures to address algorithmic bias, such as testing our algorithms and regularly reviewing our data sources. However, there is a risk that our algorithms could produce discriminatory or unexpected results or behaviors (e.g., hallucinatory behavior) that could harm our reputation, business, customers, or stakeholders. In addition, the use of AI involves significant technical complexity and requires specialized expertise. Any disruption or failure in our AI systems or infrastructure could result in delays or errors in our operations, which could harm our business and financial results.”

  • ResMed: Addressing AI risk associated with certain of its product offerings

    “Certain of our products and services include the use of artificial intelligence (AI), which is intended to enhance the operation of our products and services. AI innovation presents risks and challenges that could impact our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Ineffective AI development and deployment practices could subject us to competitive harm, regulatory action, increased cyber risks and legal liability, including under new proposed AI regulation in the European Union. The FTC has issued a report expressing a concern regarding AI and bias across industry sectors, including in the healthcare space, and has suggested that such bias could lead to unfair and deceptive practices, among other concerns. Any changes to our ability to use AI or concerns about bias could require us to modify our products and services or could have other negative financial impact on our business.”

  • Takeda: Addressing AI risk associated with certain digital technologies

    “The increased use of digital technologies involving personal data, such as mobile health apps, wearables, digitalization of clinical trials or artificial intelligence tools deployed on personal data pose additional risks for our company both in terms of the larger volume of personal data we handle but also in terms of potential security threats of such technology and our ability to assess the deployment of each technology because of the sheer volume and speed at which they are being developed. Compliance with existing, proposed and recently enacted laws and regulations can be costly; any failure to comply with these regulatory standards could also subject us to legal and reputational risks. Misuse of or failure to secure personal information could also result in violation of data privacy laws and regulations, legal proceedings against us by governmental entities or others or damage to our reputation and credibility and could also have a negative impact on our company results.”

AI Litigation Trends

Some companies have seen AI-related risks materialize and progress to litigation. While the two cases discussed below, one related to Cigna and the other related to iTutorGroup, fall more into the category of a company getting into hot water for alleged misuse of traditional AI, generative AI has already been the source of several lawsuits, including a few involving privacy violations, as well as copyright and trademark infringement. See this article from K&L Gates for a summary of recent trends in generative AI litigation in the US.

  • Cigna was hit with a class-action lawsuit alleging that the health insurance company used a computer system whose algorithm wrongfully denied members’ claims for reimbursement of medical expenses. The computer program reviewed thousands of patient health insurance claims that historically had been the responsibility of individual human reviewers, and it allegedly rejected claims as a matter of course. Doctors then allegedly rubber-stamped those denials without reviewing the individual claims. This practice, if true, runs afoul of a California law that requires insurers to give each claim a “thorough, fair, and objective investigation.”
  • iTutorGroup, an online tutoring company, entered into a $365,000 settlement with the US Equal Employment Opportunity Commission (EEOC) to resolve allegations that it violated the Age Discrimination in Employment Act (ADEA), which prohibits employers from discriminating based on age. At issue was iTutorGroup’s programming of its tutor application software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. The company allegedly rejected more than 200 qualified US-based applicants because of their age.

Based on how things have been trending, it stands to reason we are not far off from seeing securities class action lawsuits involving stock drops resulting from significant AI mishaps. In those cases, duty of oversight claims against directors will likely follow.

AI’s Legal and Regulatory Framework

The AI legal and regulatory landscape is still in its nascency. What we currently have are existing laws and regulations that are being applied to rapidly changing technology. As world leaders, governments, and lawmakers continue to grapple with how best to regulate the use of AI, companies run the risk of starting AI projects, getting far down that path, and then having to change course to avoid running afoul of new laws, regulations, or standards that may reveal themselves in the future.

Companies would be wise to remain nimble and apprised of legal and regulatory developments. As a harbinger of what to expect in the US, the number of state AI bills introduced in the 2023 legislative session surpassed the number proposed or passed in prior sessions. In addition, the use of AI in healthcare has garnered the attention of state legislators, who are considering legislation to regulate its use. Consider, too, that California’s governor signed an executive order focused on understanding how to govern AI. As one might expect, the situation outside of the US is similar, just on a larger scale. The challenge for companies, especially multinationals, is that they must contend with multiple legal and regulatory frameworks that, in many cases, may not be consistent.

For a state-by-state AI legislation snapshot, see this resource from Bryan Cave Leighton Paisner LLP. For a global AI legislation tracker, see this resource from the International Association of Privacy Professionals.

Practical Considerations for AI Risk Management

The economic potential of AI, and more specifically generative AI, creates an opportunity that most companies will want to consider, if they haven’t already. What follows are a few practical considerations for companies, boards, and management teams as they consider leveraging AI technology in their operations.

  1. Ensure that the proposed use of AI aligns with the company’s business strategy. The decision of whether to utilize AI may best be contemplated within the broader assessment of the company’s business strategy. Given the multifaceted nature of risks associated with the use of AI, a business unit lead or marketing leader should not be the sole driving force and/or decision maker. Rather, this decision might best be made at the management level in consultation with the board, much as any other significant strategic initiative is considered and approved.
  2. Assess AI expertise and strengthen it as needed. Companies will want to evaluate whether their boards and management teams possess relevant knowledge of AI so they are positioned to appropriately assess AI initiatives. At the board level, a working knowledge of AI is critical to effectively carrying out fiduciary responsibilities, including the duty to provide oversight of company operations. Management teams developing AI initiatives are better positioned to navigate the complex legal and regulatory landscape touched on in this article when they have AI expertise in-house. Companies will want to consider engaging external advisors to fill any gaps, as well as facilitating training, as needed. Whether companies should hire individuals and/or appoint directors who have AI-related expertise will depend largely on the company. Certainly, chief AI officers and director trainings specifically focused on AI are more common now than they were just a few years ago. For more on this topic, consider Deloitte’s recently published article answering the question: “Does your company need a Chief AI Ethics Officer, an AI Ethicist, AI Ethics Council, or all three?”
  3. Establish an AI governance framework. In tandem with developing AI initiatives, companies may want to consider developing governance frameworks that identify the roles and responsibilities of the board and management. For boards, it may be a matter of having the full board approve initiatives, delegating oversight of AI risks to the audit committee, and, like Microsoft, delegating oversight of the responsible use of AI to a different board committee. At the management level, it may make sense to identify a group of leaders to be the primary drivers of implementing the company’s AI initiatives, while also serving as the primary point of contact for the board when it comes to AI governance. As a reminder, given the multifaceted nature of AI-related risk, companies may be best served by establishing a cross-functional group of leaders (e.g., compliance, legal, information security, business unit leaders, etc.).
  4. Assess and manage AI-related risk. Whether the AI-related risk being considered is legal, regulatory, ethical, or reputational, boards and management teams will need to have an appreciation of these risks to be able to make informed decisions concerning AI. For companies with mature enterprise risk management functions, it’s likely that AI has already made its way onto risk dashboards. That said, for those companies looking to enhance their current AI-related risk assessment processes or for companies curious about where to start, the US Department of Commerce’s National Institute of Standards and Technology (NIST) released its Artificial Intelligence Risk Management Framework, which is a guidance document for voluntary use by organizations designing, developing, deploying, or using AI systems to help manage AI-related risks.
  5. Incorporate AI into broader cybersecurity risk management processes. This one is worthy of being called out as its own item. While AI promises to increase the effectiveness and speed at which your company can operate, it also opens your company up to more cybersecurity risk. Consequently, companies should ensure that AI initiatives are being considered within the broader context of their cybersecurity risk management processes. As a reminder, companies would be well served to work with their insurance broker to confirm that their cyber liability insurance coverage contemplates the AI-related risks they currently face as well as the potential risks associated with any planned AI initiatives. Recall, too, that the SEC’s new cyber disclosure rules up the ante when it comes to reporting cybersecurity incidents.
  6. Develop an AI incident response plan. Most companies are familiar with cybersecurity incident response plans. That is, a plan outlining the predefined actions and procedures a company will follow to detect, mitigate, and recover from cybersecurity attacks. An AI incident response plan wouldn’t be all that different: it is a plan the company can leverage in the event of an AI-related compliance breach or controversy, including steps to avoid and/or mitigate reputational damage. See this article from Debevoise & Plimpton for more information on the value of AI incident response plans.
  7. Provide ongoing board and management training. Companies may want to offer their boards and management teams AI-related training opportunities. The degree to which boards and management teams participate in AI-related trainings will depend on the company and the individuals, but being able to point to this type of training will help bolster the proposition that AI initiatives were approved and monitored on an informed basis. Nowadays, finding an AI educational session, seminar, or panel isn’t hard. Consider, for example, the AI symposium that the Stanford Rock Center for Corporate Governance is hosting on November 13, 2023. It is designed to help boards and senior leaders gain knowledge and insights into AI. Topics to be discussed include many of the same ones covered in this article, but at a more granular level.
  8. Train employees. For employees involved in the development and implementation of AI initiatives, companies should ensure that those employees appreciate the applicable legal, regulatory, and compliance requirements. Companies may also want to create a process by which these individuals can quickly elevate potential issues or ask questions. These steps will help create a culture of compliance around your AI initiatives. Separately, all companies may want to consider adopting policies outlining their position on employee use of AI, in particular generative AI. See this article for a discussion of the security risks associated with employee use of generative AI. For companies that haven’t yet adopted a generative AI use policy, see this article from Perkins Coie for considerations in developing one.
  9. Remain mindful of the costs associated with AI initiatives. The cost of an AI initiative will inevitably include expenses not directly related to implementation. For example, a company may look to leverage AI in connection with its financial reporting systems. In addition to the cost of funding that new system, the company will also want to take into account the cost of investing in more robust risk assessment, governance controls and procedures, and monitoring. Developing budgets for the adoption of AI initiatives will be a frustrating and incomplete process unless ancillary costs and related resource allocation issues are considered at the same time.

Parting Thoughts

AI is exciting, but it's not without risk. Whether a company is implementing AI initiatives, assessing cybersecurity risks, or evaluating its long-term strategy, AI should be a topic of discussion at the board and management levels. The depth of those discussions will vary, but the considerations outlined above will help companies improve peripheral awareness as they consider adopting AI initiatives or when they periodically reassess those programs and the underlying governance and risk assessment frameworks.
