Given the swift pace of technological change, regulatory bodies often struggle to keep regulations up to date amidst a rapidly changing landscape. In the past couple of years, the rapid increase in investment advisers' use of Artificial Intelligence (AI)-powered tools has challenged regulators to ensure (among other priorities) that client data remains secure while still allowing advisers to use this technology to offer better client service. This has left many open questions about advisers' responsibilities under the relevant regulations when it comes to the use of AI.
In this guest post, Chris Stanley, founder of Beach Street Legal LLC, discusses how the Securities and Exchange Commission (SEC) appears to be viewing AI and how advisers can apply the existing regulatory framework to their use of this technology, including for research, marketing, client meeting note-taking, and portfolio management.
Under previous Chair Gary Gensler, the SEC in 2023 proposed a variety of new rules and rule amendments that would have regulated investment advisers' and broker-dealers' use of technologies that "optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes" (likely intended to target the use of AI without naming it explicitly). Those proposals were withdrawn earlier this year, leaving advisers to look to the existing regulatory framework (e.g., the Advisers Act, the rules thereunder, and Regulation S-P), as well as statements made by SEC officials, for guidance on using AI tools appropriately.
The concept of ‘trust but verify’ is applicable in several areas when it comes to adviser use of AI. For instance, advisers using AI tools for conducting research will likely want to verify the accuracy of AI-generated output (as these tools continue to experience hallucinations and misinterpretations). Similarly, advisers using AI in marketing (or touting their use of AI in marketing materials) will want to be aware of both the SEC’s “Marketing Rule” and the Advisers Act’s anti-fraud prohibitions (as the SEC has issued enforcement actions related to “AI Washing” [i.e., making false claims about an adviser’s use of AI]). Additionally, recordkeeping, participant consent, and client privacy and information sharing requirements under the Advisers Act’s “Recordkeeping Rule” will be relevant for advisers who use AI-powered notetaking tools.
In this environment, advisers can act proactively to remain in compliance with current regulations and put themselves on good footing for potential changes to the regulatory environment surrounding AI. Such steps, among others, could include surveying staff to understand the firm's current use of AI tools, determining which AI tools and use cases will be permitted (and which will not), conducting due diligence on the AI tools being used, and training and testing staff on the firm's policies.
Ultimately, the key point is that because regulation will invariably lag behind the rapid pace of AI innovation, advisers will, for the moment, have to conform their AI practices as best they can to the existing regulatory framework. Doing so could allow advisers to take advantage of the capabilities that AI tools provide while maintaining their fiduciary duty to their clients.
Read More…



















