New AI risks include AI-washing
AI-washing is a new AI risk that needs to be considered in organizational risk management strategies

Earlier this year, the U.S. Securities and Exchange Commission began warning publicly traded companies about the risk of “AI-washing” — that is, making misleading statements to investors about how well the company manages its use of artificial intelligence. This adds yet another item to the list of AI risks companies need to keep in mind as part of their risk management strategy.

Brace yourselves, CISOs. This risk could hit closer to home than you realize.

The issue here is that poor management of AI-related risks could lead to personal charges against you, or other executives involved in developing and managing artificial intelligence at your business. Your company not only has an incentive to tackle such risks wisely — you do, too.

What is AI-washing?


First, let’s unpack how to define AI-washing. The phrase itself is a knock-off of “greenwashing,” which is when a company promotes itself as more environmentally conscious than it actually is. Greenwashing (and enforcement against it) has been around for the better part of a decade. Now that artificial intelligence is coming to the fore, worries about AI risks and AI-washing are following a similar path.

So far, the SEC has charged only two companies for AI-washing. Both were online investment advisory firms that said they use nifty AI algorithms to make trading recommendations to customers; in reality, neither firm used AI at all. Both paid six-figure fines.

A more ominous warning, however — one that applies to public companies in any sector — came from a speech delivered in April by Gurbir Grewal, the head of the SEC Enforcement Division: 

“There are any number of reasons that a public company may disclose AI-related information,” Grewal said. “It may be in the business of developing AI applications. It may use AI capabilities in its own operations to increase efficiency and value for shareholders, or it may discuss security risks or competitive risks from AI. But irrespective of the context, if you’re speaking on AI, you too must ensure that you do so in a manner that is not materially false or misleading.”

“If you’re speaking on AI, you too must ensure that you do so in a manner that is not materially false or misleading.”

Gurbir Grewal, Head of the SEC Enforcement Division

From cybersecurity risk disclosures to personal liability


How do we get from that broad threat to personal charges filed against a CISO? One good example comes from IT services firm SolarWinds. Last year the SEC sued both SolarWinds as a corporation and its CISO personally for making misleading disclosures about the company’s cybersecurity risks.

In that case, the SEC pointed to a “cybersecurity statement” that SolarWinds had published for years promising investors and the public that the company embraced high standards of security and software development. 

According to the SEC, the reality at SolarWinds fell far short of those promises, as demonstrated by the devastating attack SolarWinds suffered in 2020 at the hands of Russian-sponsored hackers. Since the CISO was aware of those shortcomings but didn’t push back on the company’s disclosures, the SEC argues, he is also in the liability hot seat. (SolarWinds denies the allegations and has vowed to fight the SEC in court.)

Now back to Grewal and his warnings in April about AI-washing:

“I would look to our approach to cybersecurity disclosure failures generally: we look at what a person actually knew or should have known; what the person actually did or did not do; and how that measures up to the standards of our statutes, rules, and regulations.”

Gurbir Grewal, Head of the SEC Enforcement Division

It’s not hard to see how all that might apply to artificial intelligence disclosures, too.

For example, a company might publish a “declaration of ethical AI principles” in an SEC filing or some other corporate statement, touting all the usual goals of security, privacy, inclusiveness (read: anti-discrimination), transparency, and the like. On the inside, however, employees complain that data validation hasn’t been done, that testing results are being ignored, or that any number of other AI internal controls aren’t working as promised.

That mismatch between public disclosure and internal reality creates the misleading disclosure risk that can lead to an SEC probe. 

CISOs and other compliance professionals at non-public companies might think this risk doesn’t apply to them. Not so fast. The SEC’s line of thinking is similar to how the Federal Trade Commission (FTC) approaches data protection, and the FTC can take action against both public and private companies.

What all companies need is a keen understanding of how they’re governing artificial intelligence and what internal control challenges they encounter along the way.

The 3 fundamentals of AI oversight


If CISOs (or internal auditors, compliance officers, and others trying to manage AI risk) want to stay ahead of this threat, their compliance programs will need several capabilities.

1. Policy management tools

First, you need strong policy management tools to draft the policies you’ll need. For example, you might want a tool that includes template policies for data validation, testing, disclosure to consumers, and so forth. You’d also want a tool that can map your policies to the requirements contained in the regulations or risk management frameworks you follow, such as NIST’s AI Risk Management Framework or ISO standards.
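To make that mapping idea concrete, here is a minimal sketch in Python, assuming hypothetical policy names and requirement IDs (they are not actual NIST AI RMF or ISO clause references). It maps each internal AI policy to the framework requirements it is meant to satisfy and flags any requirements left uncovered.

```python
# Minimal sketch: map internal AI policies to the framework requirements
# they are meant to satisfy, then report any requirements left uncovered.
# Policy names and requirement IDs are illustrative placeholders only.
from dataclasses import dataclass, field

@dataclass
class Policy:
    name: str
    covers: set[str] = field(default_factory=set)  # requirement IDs this policy addresses

# Requirements drawn from the frameworks you follow (hypothetical IDs).
framework_requirements = {
    "GOVERN-1: AI risk roles and responsibilities defined",
    "MAP-2: Training data sources documented and validated",
    "MEASURE-3: Model test results reviewed before release",
    "MANAGE-4: AI risks disclosed to stakeholders",
}

# Internal policies and the requirements each one covers (also hypothetical).
policies = [
    Policy("Data Validation Policy", {"MAP-2: Training data sources documented and validated"}),
    Policy("Model Testing Policy", {"MEASURE-3: Model test results reviewed before release"}),
]

covered = set().union(*(p.covers for p in policies))
gaps = framework_requirements - covered

print(f"Covered {len(covered)} of {len(framework_requirements)} requirements.")
for gap in sorted(gaps):
    print(f"  GAP - no policy maps to: {gap}")
```

A real policy management tool handles this mapping for you; the point is simply that every framework requirement should trace back to a documented policy, and the gaps should be visible rather than discovered after a disclosure is questioned.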

2. Internal controls

Second is a strong set of internal controls, plus a single repository to preserve the results of those controls. For example, you’ll want controls to assure that all data feeding into your AI has been validated and that none of it violates data privacy regulations. Those controls need to be documented, and the results of any control testing should be preserved too.
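As a rough illustration of that idea (not any particular product’s API), the sketch below shows an automated validation control that checks incoming records and appends a timestamped result to a single evidence log, so every control run leaves a preserved record. The required fields and the log location are assumptions made for the example.

```python
# Rough sketch: a data-validation control whose results are preserved as
# timestamped evidence records in a single JSON Lines log file.
# The required fields and the log path are assumptions for this example.
import json
from datetime import datetime, timezone
from pathlib import Path

EVIDENCE_LOG = Path("ai_control_evidence.jsonl")               # single evidence repository
REQUIRED_FIELDS = {"record_id", "source", "consent_obtained"}  # hypothetical schema

def validate_record(record: dict) -> list[str]:
    """Return a list of issues found in one input record."""
    issues = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if record.get("consent_obtained") is False:
        issues.append("no consent recorded; may violate data privacy rules")
    return issues

def run_control(records: list[dict]) -> None:
    """Run the validation control and append its result to the evidence log."""
    failures = {}
    for record in records:
        issues = validate_record(record)
        if issues:
            failures[record.get("record_id", "unknown")] = issues
    result = {
        "control": "AI training data validation",
        "run_at": datetime.now(timezone.utc).isoformat(),
        "records_checked": len(records),
        "records_failed": len(failures),
        "failures": failures,
    }
    with EVIDENCE_LOG.open("a") as log:
        log.write(json.dumps(result) + "\n")

if __name__ == "__main__":
    run_control([
        {"record_id": "r1", "source": "crm_export", "consent_obtained": True},
        {"record_id": "r2", "source": "web_scrape"},  # missing consent field -> flagged
    ])
```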

3. A security-first culture

The third important capability is a bit more intangible: you need a strong culture that encourages others to speak out. This means that when junior employees see something amiss with your AI efforts, they feel comfortable escalating those concerns.

Moreover, when more senior executives (read: you) receive troubling information about your AI risks, you’ll need to feel comfortable speaking up to management or the board, too: “This isn’t working. We need to disclose this issue rather than run the risk of misleading people either by deliberate statements or by omission.” 

Most fundamentally, your senior leadership and the board need to support a strong culture of talking about and confronting risks, rather than brushing them under the rug — which has been the secret to corporate success since long before AI ever came along.
