Advisors using AI could take a big step toward regulatory compliance if they began every query posed to ChatGPT or another large language model with a simple statement like: “I am a fiduciary and an SEC-registered RIA.”
So thinks Beth Haddock, chief legal officer and head of compliance at the wealth management software firm AdvisorEngine. In a talk Wednesday on “AI & Compliance: Navigating the New Frontier” at Financial Planning’s ADVISE AI conference in Las Vegas, Haddock said advisors who use AI could collectively reduce their risk of violating industry regulations if they always started queries by establishing they are fiduciaries obliged to always put their clients first.
That sort of prompt informs the AI that, “You are helping me serve the best interests of my clients.”
“And then you go from there,” Haddock said. “I think what we’ll do is sort of hone all the AI tools, and it takes out the hallucinations. It takes out where they might think you’re a different type of entrepreneur.”
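To make the suggestion concrete, here is a minimal sketch of what such a standing prefix might look like in practice. The prefix wording and the build_prompt helper are illustrative assumptions, not features of ChatGPT or any other vendor's product.

```python
# Minimal sketch: prepending a fiduciary context statement to every prompt.
# The FIDUCIARY_PREFIX wording and the build_prompt() helper are illustrative
# assumptions, not part of any specific AI product's API.

FIDUCIARY_PREFIX = (
    "I am a fiduciary and an SEC-registered RIA. "
    "You are helping me serve the best interests of my clients."
)

def build_prompt(question: str) -> str:
    """Return the full prompt with the fiduciary context prepended."""
    return f"{FIDUCIARY_PREFIX}\n\n{question}"

if __name__ == "__main__":
    # Example: the same prefix travels with every question the advisor asks.
    print(build_prompt(
        "Summarize the trade-offs of a 60/40 allocation for a client nearing retirement."
    ))
```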
Haddock was joined on stage by Richard Chen, an advisor advocate and the founder of Brightstar Law Group in New York. Both speakers laid out regulatory pitfalls presented by the use of AI in financial planning and steps advisors can take to stay on the right side of the law.
What governs AI use if there’s no specific regulation?
Haddock noted that the Securities and Exchange Commission and many other industry watchdogs now have no specific regulations governing advisors’ use of AI. A previous proposal put forward under President Joe Biden was dropped earlier this year under a general deregulatory push ushered in by President Donald Trump.
But even in the absence of an AI-specific rule, existing regulations put significant limits on how advisors can use the technology. She mentioned not only the fiduciary duty to put clients’ interests first but also rules on cybersecurity and the protection of private client data. There’s also the need to record conversations with clients about investment recommendations and similar matters.
Perhaps most importantly, she said, advisors need to talk to the makers of AI products to try to learn how those technologies “think” and what limitations they may have. Such due diligence should be documented as proof of pains taken should regulators start raising questions.
“We have precedent where the SEC has said, ‘Use whatever tool you want, but make sure you have a white paper or methodology paper,’” she said. “This way you can tailor disclosures and you can show your due diligence.”
Haddock acknowledged that AI firms are not always forthcoming with explanations of how large language models and similar technologies produce their results. Advisors, in a sense, will have to train tech providers on what their expectations are.
“As an industry, potentially, the more we ask, the more common it’ll be that methodology papers and white papers are available,” Haddock said.
Four regulatory concerns touching on advisors and AI
Chen said he sees existing regulations raising these four primary concerns for advisors who are using AI:
Data privacy. The SEC already has Regulation S-P, a rule requiring advisors and brokers to take extensive steps to prevent their clients’ private data from being released publicly. “Whether it’s AI or another sort of service-provider tool, the core is making sure that there are reasonably aligned policies that can safeguard that information,” Chen said.
Misleading information. Regulators have extensive rules forbidding the distribution of inaccurate or unprovable statements. The SEC’s marketing rule, for instance, requires advisors to be able to furnish evidence to support any message sent to two or more current or potential clients.
“Part of the challenge of that with AI tools is: How do you prevent hallucinations?” Chen said. He said AI systems are known to “hallucinate,” or provide fabricated results, when they haven’t been fed enough data and fill the gaps with assumptions that make their answers appear convincing. That tendency can be countered, in part, by providing the right prompts.
“Say: Don’t guess about what you’re giving me,” Chen said, “and ask: What information do you need that I haven’t given you and that can be helpful? At the same time, tell the tool to show its work.”
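A rough sketch of how Chen's instructions might be folded into a standing prompt template appears below; the exact wording and the build_guarded_prompt helper are assumptions made for illustration, not language endorsed by any regulator or vendor.

```python
# Illustrative prompt template reflecting Chen's three suggestions: tell the
# tool not to guess, ask it what information is missing, and have it show its
# work. The exact wording is an assumption for demonstration purposes.

ANTI_HALLUCINATION_INSTRUCTIONS = """
Do not guess about anything I ask for. If you lack the information needed to
answer accurately, list the specific information you need from me instead of
answering. For any answer you do give, show your work: cite the inputs and
reasoning you relied on.
""".strip()

def build_guarded_prompt(question: str) -> str:
    """Combine the anti-guessing instructions with the advisor's question."""
    return f"{ANTI_HALLUCINATION_INSTRUCTIONS}\n\nQuestion: {question}"
```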
Bias. The fiduciary duty to always do what’s best for clients is premised on the idea that advisors know their clients well. But AI systems that have ingested a lot of data about one particular group of people — say, workers on the verge of retirement — may be predisposed to think everyone has the same investing and saving goals.
“There’s also the duty of loyalty to make sure that you’re not providing conflicted advice that’s not at least disclosed to be conflicted, or you know, where there’s conflicts that are not properly managed,” he said.
Record retention. Chen said many firms have adopted systems to track advisors’ communications on emails and messaging apps. That’s not necessarily true with AI.
“One of the challenges, unfortunately, we’ve seen so far is that a lot of advisors use personal ChatGPT accounts, which unfortunately creates a problem with respect to retention at the firm level in terms of what needs to be retained in order to show this is what was put into the tool and these are the outputs,” he said.
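One way to address the retention gap Chen describes is to route every interaction through a firm-controlled log. The sketch below assumes a hypothetical JSON-lines file and record layout; an actual program would follow the firm's own books-and-records procedures.

```python
# Minimal sketch of firm-level retention of AI interactions: each prompt and
# output is appended to a local log so the firm, not a personal account, holds
# the record. The file name and record fields are illustrative assumptions.

import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_interaction_log.jsonl")  # hypothetical firm-controlled store

def retain_interaction(advisor_id: str, prompt: str, output: str) -> None:
    """Append one prompt/output pair to the firm's retention log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "advisor_id": advisor_id,
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```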
AI providers’ promises about keeping client data private
From the audience, Andrew Gladhill, the chief compliance officer and director of investments at the Portland, Oregon-based RIA Human Investing, asked the panelists how far anyone can trust AI providers’ assurances about data privacy. Many AI firms, he continued, say they make sure any client data advisors upload is first “anonymized,” or made untraceable back to a particular person, before being used to train or improve their systems.
“How do you think about that in terms of compliance risk?” Gladhill asked.
Chen said it’s exceedingly hard to verify whether firms’ promises about keeping client data private are actually being kept. “At least in the current state where tools are still being refined, I treat it with a healthy amount of skepticism as to whether or not that’s actually true,” he said.
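For advisors who share Chen's skepticism, one complementary step is to strip obvious identifiers on the advisor's side before anything is uploaded. The sketch below uses a few simplistic, assumed regular-expression patterns and would not catch every identifier; it is meant only to show the idea of client-side redaction.

```python
# Illustrative client-side redaction: removing obvious identifiers from text
# before it is sent to any outside AI tool, rather than relying solely on a
# provider's promise to anonymize. The patterns are simplistic assumptions.

import re

REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches for each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    print(redact("Client SSN 123-45-6789, email jane@example.com"))
```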
Low-, medium- and high-risk applications of AI
Haddock said advisors should take care to let clients know exactly when they are using AI and for what purposes.
“The regime for registered investment advisors in the U.S. is really based on a disclosure and a transparency concept, so why not use that also for AI?” she said.
Haddock said she lumps advisors’ possible uses of AI into low-risk, medium-risk and high-risk categories. Low-risk activities include relying on it to take notes of discussions with clients, write emails and produce marketing materials that can easily be reviewed for accuracy. But once advisors approach calling on AI to make investment recommendations or provide other information directly to clients, the risks quickly move into the medium or high categories.
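Haddock's tiers could be written down as a simple policy table that a firm's tooling consults before anything is sent to an AI system. The mapping below mirrors the examples she gave; the data structure, the tier labels attached to each use case and the review rule are assumptions for the sketch.

```python
# Illustrative mapping of AI use cases to Haddock's risk tiers. The specific
# assignments follow the examples in her talk; everything else is an assumed
# convention for this sketch.

RISK_TIERS = {
    "meeting_notes": "low",
    "email_drafting": "low",
    "marketing_materials": "low",              # low only if reviewed for accuracy
    "investment_recommendations": "medium-to-high",
    "client_facing_information": "medium-to-high",
}

def requires_human_review(use_case: str) -> bool:
    """Anything not explicitly classified as low risk gets human review."""
    return RISK_TIERS.get(use_case, "medium-to-high") != "low"
```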
One way to mitigate the regulatory concerns is to always give human beings the final say on anything going out to investors. Advisors, for instance, may use AI to produce the sorts of hypothetical recommendations that go before an investment committee for approval, Haddock said.
“The investment committee are humans that can ask lots of good questions,” she said. “Then you can preserve the agenda. You can preserve all the meeting materials and show that you met your fiduciary duties for clients’ best interest or any of the other investment concepts.”

















