
AI Agents: Automating Success or Fast-Tracking Corporate Liability?

AI agents are the next evolution of generative AI, with the potential to reshape how companies operate and innovate. For companies considering implementing AI agents or already using them, it’s important to go in with eyes wide open. In this week’s blog, my colleague Lenin Lopez discusses what AI agents are, the accompanying benefits and risks, as well as practical considerations and red flags. —Priya Huskins

Autonomous agents, agentic artificial intelligence (AI), or AI agents seem to be all the rage lately. Most recently, Salesforce CEO Marc Benioff was quoted as saying that today’s CEOs will be the last to lead all-human workforces and that “[w]e are really moving into a world now of managing humans and agents together.” 


Salesforce has already implemented AI agents, and it isn’t alone. For directors and management teams whose companies are in the same AI agent camp as Salesforce or are considering moving in that direction, there are certain considerations worth keeping in mind to limit risks and avoid becoming the subject of the next AI-related lawsuit.

This article will: 

  • Briefly explain what AI agents are. 
  • Describe some of the risks associated with the use of AI agents, as well as associated litigation and enforcement trends. 
  • Share a few practical considerations and red flags for directors and management teams to consider. 

AI Agents: An Overview 

AI agents are powered by generative AI. As a reminder, generative AI can create content (e.g., text, images, audio, or video) when prompted. Think ChatGPT, Claude, and DeepSeek. At their core, these systems generate responses using models trained on data often sourced from the internet. Of course, generative AI systems lack human judgment.

Send in the AI agents. AI agents are effectively machine-based applications that can achieve specific goals by using complex decision-making without the need for human involvement. 

ServiceNow, a software company, has deployed AI agents across its platform. It describes agentic AI as follows:
“Agentic AI, the next evolution of GenAI, involves AI agents that act and interact in smart and autonomous ways with humans providing oversight and guardrails. With agentic AI, humans can be supported by multiple AI agents trained to perform specific tasks, rather than, for example, a single AI assistant or chatbot relying on a human’s specific prompts or queries. Agentic AI is available to our customers as a Now Assist feature, where they can easily create agentic skills tailored to their unique needs. AI agents can use these skills to work together with humans to help augment and accelerate workflow outcomes by performing and completing actions on the human’s behalf.” 
Salesforce dedicates a section of its website to agentic AI. Here is its description:
“Agentic AI is the technology that powers AI agents so they can act autonomously without human oversight. By serving as a comprehensive platform, agentic AI facilitates seamless interaction between AI agents and humans, fostering a collaborative environment where both can work together. This platform has a suite of tools and services to help AI agents learn, adapt, and collaborate so they can quickly handle complex and dynamic tasks. It’s the next frontier of AI known for its ability to operate independently by making decisions, adapting to dynamic situations, setting goals, and reasoning.” 
And here is a hypothetical use case offered by McKinsey that may help in visualizing how AI agents could work in the real world:

Loan Underwriting – AI Agents Assisting in Preparing Credit-Risk Memos: Financial institutions use credit-risk memos to assess the risks of extending credit or a loan to a borrower. This process typically requires financial institutions to invest a significant amount of time in collecting information and performing analyses, which culminates in a final risk memo for credit decision and underwriting purposes. McKinsey notes this “tends to be a time-consuming and highly collaborative effort, requiring a relationship manager to work with the borrower, stakeholders, and credit analysts to conduct specialized analyses, which are then submitted to a credit manager for review and additional expertise.” 

According to McKinsey, one possible solution would be to create multiple AI agents responsible for specific tasks, like extracting information, calculating ratios, and summarizing information. Each specialized AI agent would then report to a single AI agent that would generate a draft credit memo, which would be delivered to a human portfolio manager.  
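
To make this architecture concrete, here is a minimal, hypothetical Python sketch of the pattern McKinsey describes: specialized agents handle narrow tasks and report to a coordinator that assembles a draft memo for a human reviewer. The class names, the run interface, and the stubbed logic are all illustrative assumptions, not a real implementation; a production system would connect each agent to an actual language model and live data sources.

```python
# Hypothetical sketch of the credit-memo use case: specialized agents
# handle narrow tasks and a coordinator agent drafts the memo for a
# human portfolio manager. All names and logic are illustrative stubs.

from dataclasses import dataclass

@dataclass
class BorrowerFile:
    name: str
    revenue: float
    debt: float
    notes: str

class ExtractionAgent:
    """Pulls the key figures out of the borrower's file."""
    def run(self, file: BorrowerFile) -> dict:
        return {"borrower": file.name, "revenue": file.revenue, "debt": file.debt}

class RatioAgent:
    """Computes the credit ratios an analyst would normally calculate."""
    def run(self, facts: dict) -> dict:
        facts["debt_to_revenue"] = facts["debt"] / facts["revenue"]
        return facts

class SummaryAgent:
    """Condenses qualitative notes into a short narrative."""
    def run(self, file: BorrowerFile) -> str:
        return f"Analyst notes: {file.notes[:200]}"

class CoordinatorAgent:
    """Collects each specialist's output and drafts the memo.
    The draft is returned, not approved: a human reviews it."""
    def run(self, file: BorrowerFile) -> str:
        facts = RatioAgent().run(ExtractionAgent().run(file))
        summary = SummaryAgent().run(file)
        return (
            f"DRAFT CREDIT MEMO for {facts['borrower']}\n"
            f"Debt/revenue ratio: {facts['debt_to_revenue']:.2f}\n"
            f"{summary}\n"
            "Status: PENDING HUMAN REVIEW"
        )

if __name__ == "__main__":
    file = BorrowerFile("Acme Corp", revenue=12_000_000, debt=4_500_000,
                        notes="Stable cash flow; customer concentration risk.")
    print(CoordinatorAgent().run(file))  # a portfolio manager reviews this draft
```

The design point worth noting, consistent with the McKinsey description, is that the coordinator produces a draft for human review rather than a final credit decision.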

In addition to ServiceNow and Salesforce, SAP, Oracle, Workday, and Microsoft are just a few other companies rolling out AI agents. For additional details, see this article from CFO Dive.

For those interested in a deep dive into multiple AI agents and how they process information, see this discussion from IBM.

Risks Related to AI Agents 

As we discussed previously, using AI poses risks. Those risks include the lack of transparency in the underlying assumptions that AI models are built on, the introduction of bias and discrimination into decision-making, privacy violations, social manipulation, and even the potential to contribute to the next financial crisis.

AI agents compound those risks and create additional ones as a function of the following: 

  • More Data Means More Complexity and Risk. The generative AI that powers AI agents typically relies on massive data pools, often sourced from multiple providers, internal databases, and real-time user interactions. As AI agents communicate with one another, the complexity of this information flow increases exponentially.
  • AI Agents Increase Cybersecurity Risks. Deploying AI agents across a company’s network significantly expands the points of access for bad actors to exploit. Because AI agents are meant to operate autonomously, there is a risk that humans may not be in the loop at every point of potential failure. See this article for a discussion of some of the unique cybersecurity risks associated with AI agents. 

Risks are easy to identify; quantifying them is another story. Companies have a full plate of risks to contend with, so how should they think about the risks associated with AI and, specifically, AI agents? Without clear AI regulations or guidance on how courts will assess liability, companies may struggle to determine how much time, cost, and other resources to devote to AI-related risk management.

Trends in AI litigation may shed some light, or perhaps serve as the equivalent of a “proceed with caution” sign.

AI Litigation and Regulation Trends

Since I last discussed AI litigation trends, the types of claims being made haven’t changed significantly. For instance, IP infringement, privacy and data security violations, discrimination, product liability, and fraud remain common claims. There are simply more of them. For example, securities class action claims relating to AI more than doubled, from seven in 2023 to 15 in 2024.

Then you have lawsuits like the one involving UnitedHealth. In UnitedHealth’s case, a subsidiary, NaviHealth, allegedly relied on algorithm-based technology to prematurely end nursing home care for certain patients. Cases like this have attracted a significant amount of attention, not just from the plaintiffs’ bar, but also from legislators at both the federal and state levels. For more information regarding laws and regulations relating to AI, see this resource from the International Association of Privacy Professionals.

With the number of companies leveraging AI continuing to rise, more AI-related lawsuits will likely follow. When it comes to the regulatory environment, we should expect the patchwork of federal, state, and foreign regulatory frameworks to get even more complicated. 

It’s worth noting that even if the federal government decides to ease off on AI regulation and enforcement, states and foreign governments show no signs of slowing down. 

Practical Considerations and Red Flags 

While AI agents offer potential benefits to a company’s bottom line, they also present serious risks that can expose companies to private litigation and regulatory investigations.

If your company is currently using or considering AI agents, the following are a few key questions directors may want to ask management, as well as red flags to watch for.

  1. What specific roles and functions will AI agents perform? 

    Why this question matters: As discussed above, unlike traditional forms of AI, which analyze data or automate repetitive tasks, AI agents can act independently, making decisions without human intervention. Confirm that management has ensured that AI agent roles are well-defined, aligned with business needs, and don’t introduce unnecessary complexity or risk. 

    🚩 If AI agents are being considered or deployed without a clear, strategic use case (like automating financial modeling, customer service, or cybersecurity monitoring), deployment may lead to inefficiencies, operational confusion, or unintended consequences. Further, if management can’t articulate the rationale for AI agents, that’s reason to hit the pause button. 

  2. How will we ensure that our AI agents remain aligned with the objectives we set? 

    Why this question matters: AI agents operate autonomously, but they should stay within clearly defined guardrails to avoid unauthorized actions or unintended behavior. It’s important to confirm that those guardrails are revisited from time to time. While it’s presumably too soon for AI agents to run million-dollar transactions, AI has arguably already made life-and-death decisions (recall the UnitedHealth case discussed above). Ultimately, companies should establish rules, oversight mechanisms, and human intervention points to keep AI decision-making in check. 

    🚩 If there are no clear constraints on your AI agents’ authority, such as financial limits or compliance triggers that require human involvement, there is a serious risk of AI making unauthorized decisions, like approving transactions or altering business processes without proper review. (A sketch of such guardrails appears after this list.)

  3. What are the risks of AI agents acting unpredictably, and how do we mitigate them? 

    Why this question matters: Since AI agents learn and adapt, they may behave in unexpected ways, leading to errors, regulatory breaches, or lawsuits. Directors would be wise to ensure the company has a risk management strategy that includes continuous monitoring, auditing, and remediation mechanisms. 

    🚩 If the company lacks a way to audit its AI agents’ actions or detect unintended behaviors in real time, it could result in a parade of horribles. In addition, the lack of a “kill switch” or human override mechanism is a serious red flag. 

  4. How do we ensure AI agents are transparent and accountable? 

    Why this question matters: Outside of the individuals who developed them, AI agents may be described by most as operating as black boxes. This may become an issue when companies must respond promptly to regulators, shareholders, and customers who demand transparency in AI agent decision-making, especially in sensitive areas like hiring, healthcare, lending, and governance. You will want to ensure that individuals who may have to respond to these types of inquiries (e.g., executive leadership, investor relations, legal) have a general understanding of how your AI agents work. There also needs to be an easy way to pull this information should your processes become the subject of a regulatory investigation or litigation. 

    🚩 If the company cannot clearly explain how its AI agents make decisions, it risks increased regulatory scrutiny. In the case of a lawsuit, you are looking at a court potentially granting a plaintiff’s broad discovery requests. If there is no clear accountability structure for AI-driven outcomes, directors should question whether the system is ready for deployment. 

 

  5. How will AI agents impact employees, and what is our workforce transition strategy? 

    Why this question matters: As companies invest in AI, layoffs seem inevitable. From a workforce perspective, this can raise concerns about job security and morale, as well as negatively impact company culture. In that spirit, directors should ensure the company has thought through the impact on human capital, including plans for employee retention, severance, and upskilling. 

    🚩 If AI agents are adopted without consideration for workforce impacts, the company may face backlash from employees, unions, and the public. Companies that don’t have a clear strategy for employee retention risk damaging morale and increasing turnover.
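
To ground questions 2 through 4, here is a minimal, hypothetical Python sketch of the kinds of controls they point to: a financial limit that escalates to a human, a kill switch, and an append-only audit log that can be produced for a regulator or in discovery. The spending threshold, file path, and class names are illustrative assumptions, not a prescribed design.

```python
# Illustrative governance wrapper around an AI agent's proposed actions.
# The limit, the kill switch, and the audit log are assumptions for
# illustration; real deployments would tailor each control to their own
# compliance and oversight requirements.

import json
import time

SPEND_LIMIT = 10_000.00        # hypothetical: above this, a human must approve
AUDIT_LOG_PATH = "agent_audit.jsonl"

class AgentGovernor:
    def __init__(self):
        self.killed = False    # the "kill switch" directors should ask about

    def kill(self, reason: str):
        """Human override: immediately halt all agent actions."""
        self.killed = True
        self._log({"event": "kill_switch", "reason": reason})

    def review(self, action: str, amount: float) -> str:
        """Gate a proposed action: block, escalate to a human, or allow."""
        if self.killed:
            decision = "blocked"           # kill switch engaged
        elif amount > SPEND_LIMIT:
            decision = "needs_human"       # compliance trigger: human in the loop
        else:
            decision = "allowed"
        self._log({"event": "decision", "action": action,
                   "amount": amount, "decision": decision})
        return decision

    def _log(self, record: dict):
        """Append-only log so each decision can be reconstructed later,
        e.g., for a regulator or in response to a discovery request."""
        record["timestamp"] = time.time()
        with open(AUDIT_LOG_PATH, "a") as f:
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    governor = AgentGovernor()
    print(governor.review("approve_invoice", 2_500))        # allowed
    print(governor.review("approve_transaction", 50_000))   # needs_human
    governor.kill("anomalous behavior detected")
    print(governor.review("approve_invoice", 100))          # blocked
```

The design choice worth noting is that the log entry is written as part of the gating decision itself, so a record exists even when something later goes wrong.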

Parting Thoughts

AI agents bring with them significant opportunities and risks. It’s important to remember that the risks associated with AI agents are effectively compounded versions of what we have already seen with other AI solutions. With the questions and red flags discussed above in mind, directors and management will be better positioned to decide whether to use, or continue to use, AI agents. 
