Artificial intelligence ("AI") could significantly alter the landscape for investment advisers, bringing with it the potential to quickly develop personalized investment advice using large amounts of unstructured data, automate regulatory compliance processes, and simplify communication with investors about investment performance. The implementation of AI could also introduce high-risk issues for registered investment advisers ("RIAs"), including risks related to their fiduciary duties to investors and to client confidentiality. The SEC is paying close attention to the use of AI by RIAs, having proposed a new rule in July 2023 to address conflicts of interest raised by predictive data analytics and having conducted a sweep of RIAs in August 2023 to learn more about current uses of AI by private fund advisers. As RIAs explore the use of AI to enhance their investment advisory services, they should be mindful of the potential legal, regulatory, and practical risks presented by these tools.

In July 2023, the SEC proposed new rules under both the Securities Exchange Act of 1934 and the Investment Advisers Act of 1940 aimed at eliminating conflicts of interest associated with "predictive data analytics" (the "Proposed Conflicts Rules"). The Proposed Conflicts Rules define predictive data analytics to include technology that uses algorithms and similar methods or processes to "direct investment-related behaviors or outcomes of an investor." Accordingly, if adopted as proposed, the Proposed Conflicts Rules would cover the use of AI systems by RIAs to aid in providing investment advice to investors. Shortly after issuing the Proposed Conflicts Rules, the SEC launched a probe into how private fund advisers use AI, signaling the SEC's intent to monitor and control the use of AI.

In this note we provide an overview of ways AI may be used by investment advisers, highlight important aspects of the Proposed Conflicts Rules, and outline risks posed by the use of AI by investment advisers.

Use of AI for Investment Advice by Investment Advisers

As a general matter, AI is capable of deriving value from large amounts of unstructured data by organizing and arranging that data. Generative AI refers to AI systems that create content and recommendations. Over time, AI systems learn and improve through continuous training and user input, enabling the technology to engage in natural language processing, manipulate digital images, and make informed decisions. Because AI learns adaptively through user interaction, it can analyze data patterns and tailor responses or recommendations to the user. Although AI requires a certain amount of human input, once trained in a particular field it can operate with varying degrees of direct human interaction or oversight.

Similar to AI, robo-advisers use algorithms to automatically make investment decisions or recommendations with little human oversight or supervision. However, robo-advisers that rely on fixed algorithms to generate investment advice differ from AI in that they involve no machine learning, so their outputs may be less sophisticated and less adaptable to change. Recently, some firms that offer robo-adviser technology have begun implementing AI, including machine learning systems, to enhance their offerings to investors and to take advantage of the efficiencies that AI can provide. These efficiencies include using automated features to save time on administrative tasks, to apply investor goals and preferences accurately to an investment strategy, and to develop targeted marketing to attract more investors. The services offered by AI, and the benefits and risks that accompany them, may evolve as the technology improves.

Regulatory Perspective: Proposed Rules on Conflicts of Interest Associated with the Use of Predictive Data Analytics

Under the Proposed Conflicts Rules, "predictive data analytics," or covered technology, includes AI, machine learning, or deep learning algorithms, and large language models (including generative pretrained transformers, or GPT). In August 2023, the SEC launched a sweep of RIAs to gather information to determine how private fund advisers currently use AI. As part of this probe, the SEC asked private fund advisers a series of questions about how they are managing risk involving AI, including a review of policies and procedures, contingency plans, supervision plans, and security and validation measures surrounding AI.

Shortly before the Proposed Conflicts Rules were announced, SEC Chair Gary Gensler warned that "the next financial crisis could emerge from firms' use of [AI]" due to an overreliance on, and the proliferation of, unregulated AI systems. Chair Gensler's concerns primarily related to conflicts that arise when an RIA's use of AI places the firm's interests ahead of those of investors. The Proposed Conflicts Rules make clear that the SEC is focused on firms' use of investor data to analyze and produce outputs that may or may not ultimately benefit the investor.

If adopted as proposed, the Proposed Conflicts Rules would require RIAs who use or reasonably anticipate using covered technology to:

  • Evaluate any use or reasonably foreseeable potential use of covered technology by the firm to be able to identify conflicts of interest that may arise;
  • Determine whether the conflicts will place the firm's interests ahead of the interests of investors; and
  • Eliminate or neutralize the effect of those conflicts.

Further, if adopted as proposed, the Proposed Conflicts Rules would require RIAs to adopt and implement written policies and procedures reasonably designed to prevent violations. The written policies and procedures would be required to include:

  • A written description of the process for evaluating any use or reasonably foreseeable potential use of covered technology;
  • A written description of the process for determining whether any conflict of interest will place the firm's interests ahead of the interests of investors;
  • A written description of the process for determining how to eliminate or neutralize the effect of the conflicts; and
  • A review of the adequacy of relevant policies and procedures and the effectiveness of their implementation conducted at least annually and documented in writing.

While the Proposed Conflicts Rules focus on conflicts of interest, they are an early indication that the SEC is likely preparing to ramp up AI regulation more broadly in response to the rapidly expanding use of AI and robo-adviser technology. RIAs looking to use AI technology as a tool should prepare for increased regulation. Consequently, RIAs should consider how they will address certain risks associated with their use of AI technology.

Potential Lessons About AI from Guidance on Robo-Advisers

Since 2017, the SEC has listed the "oversight of computer program algorithms that generate recommendations," marketing materials, investor data protection, and disclosure of conflicts of interest as exam priorities due to the rapid growth of robo-adviser services. The SEC has since pursued multiple enforcement actions against robo-advisers, charging them with misleading clients, making material misstatements and omissions regarding their automated services, and providing false and misleading statements about their performance. The most recent SEC guidance on robo-advisers, issued in 2017, raised unique issues associated with the use of computer algorithms and limited human interaction.

These actions reflect the SEC's continued focus on the fiduciary duty that RIAs owe to their clients. While the current SEC guidance does not specifically address concerns and risks associated with the use of AI technology, the SEC's focus on the transparency, accountability, and appropriateness of investment advice suggests that similar principles may be extended to AI-driven investment advising. Given the SEC's continued efforts to regulate AI, RIAs should consider the regulatory implications of AI use.

The SEC guidance on robo-advisers also emphasizes transparency and calls for effective disclosure of the business model and scope of services. For AI-driven investment advising, disclosing the underlying logic and data considered in generating investment advice could be substantially more difficult. Unlike robo-advisers, AI systems process large amounts of structured and unstructured data to deliver sophisticated and personalized investment solutions. Consequently, explaining or comprehending the specific factors behind an AI system's processes and conclusions could pose significant challenges for investment advisers and require more comprehensive disclosures than those currently made for robo-advisers.

In light of the SEC guidance, general best practices, and the potential for error, RIAs employing AI tools should implement human oversight processes. The SEC guidance on robo-advisers addresses risks associated with limited human involvement and oversight, such as algorithmic errors and malfunctions. Unlike robo-advisory services, AI-driven investment advising can involve varying levels of human interaction depending on how the AI system is used. Investment advisers can use AI tools to augment their decision-making, gain insight from AI-generated data analysis, or automate certain limited tasks. This human element could allow for a more nuanced and complex work product than robo-advisers can offer. However, this flexibility could also increase uncertainty about how much of the investment advising process is executed under human supervision. Human involvement and supervision may be maintained through regular system monitoring, compliance reviews, and ongoing training and qualification of employees to enhance accountability and ensure that investment recommendations align with investors' interests.

Additionally, SEC guidance cautions robo-advisers to ensure the sufficiency and clarity of their online questionnaires in soliciting relevant client information and to address possible inconsistent responses. Building on these guidelines, it is possible that the SEC will look for similar clarity on the information gathering processes that enable an AI system to produce appropriate investment advice.

Additional Risks Posed by the Use of AI by Investment Advisers

Fiduciary Risks

RIAs should be cognizant of the potential implications that using AI may have for compliance with the fiduciary duties that they owe clients. The complexity and opacity of AI algorithms may make it difficult for RIAs to fully disclose the rationale behind recommendations to investors. Further, as AI systems learn from large amounts of human-generated input data, their algorithms can inadvertently introduce biases and generate skewed or unreasonable investment recommendations that are not appropriate for investors. The duty to provide appropriate and well-researched recommendations may also be implicated, given open questions about the reliability and accuracy of AI-generated advice. Accordingly, RIAs should approach AI with caution and diligence to ensure they continue to uphold their fiduciary duties to their clients.

Confidentiality

AI systems are designed to continually learn and improve through the input data they receive, resulting in more tailored and accurate results over time. To train AI models effectively, diverse and comprehensive datasets are necessary. Increased reliance on AI tools introduces certain confidentiality risks that RIAs should be mindful of when implementing AI systems:

  • Third-party data sharing: The use of AI tools may involve the sharing of client personal data with third-party service providers. RIAs should thoroughly review the privacy and confidentiality practices of these third-party service providers to ensure that client data remains adequately protected.

  • Inadequate data anonymization: RIAs should ensure the implementation of proper data anonymization and/or aggregation processes to protect client confidentiality. This helps prevent the identification or exposure of sensitive client information during the AI system's data processing.

  • Insider information: AI algorithms used in investment advising should be designed and monitored to ensure compliance with regulations that prohibit the use of material non-public information in investment decision-making. RIAs may consider using preventive measures to avoid inadvertent incorporation of confidential or privileged information into the AI system.

  • Insider threat: RIAs should take measures to prevent employees or individuals with access to the AI systems from intentionally or inadvertently misusing or disclosing sensitive client information. Among other things, RIAs should consider the use of access controls, employee training on data protection, and ongoing monitoring to mitigate risks associated with insider threats.

Regulatory Filing Risks

Generative AI could conceivably be used by RIAs to automate completion of required regulatory forms, such as Form ADV and its annual amendments, potentially shortening the time spent preparing such filings. Beyond improved efficiency, the use of generative AI in this context may also prevent typos or other small human errors that could complicate the filing of Form ADV. Nevertheless, without human oversight, RIAs relying solely on generative AI to complete required regulatory forms risk producing Form ADV filings that contain inaccurate information or omit important information. As a result, even as generative AI systems improve and become more precise, it is critical for RIAs using generative AI in this context to implement human oversight to mitigate the risk of filing regulatory forms that are inaccurate or incomplete, which could delay the registration process.

Marketing Rule Risks

The SEC's Marketing Rule prohibits RIAs from making any untrue statement of a material fact in an advertisement, or from including a material statement of fact that the RIA has no reasonable basis for believing it will be able to substantiate. The use of generative AI to produce marketing materials without adequate human oversight or review could lead to misleading statements or inaccurate representations, especially as generative AI has been known to produce inaccurate or false information. Consequently, when using generative AI to produce material directed at influencing investors, RIAs should incorporate human oversight and carefully review AI-produced material to determine that it is accurate and not misleading.

Conclusion

AI has the potential to significantly reshape the investment adviser landscape. As RIAs consider adopting AI tools, they may want to evaluate how their use of the technology will intersect with their policies and procedures, keep up to date with developments in the rapidly changing regulatory and technological spheres, and thoughtfully weigh how to use this new technology to maximize efficiency while minimizing regulatory risk.

We are also grateful to Bhavishya Barbhaya, Haanbee Choi, Michal Folczyk, Matthew Gallot-Baker, and Daniel Kim for their contributions to this regulatory update.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.