Introduction
This very second, somewhere in your organization, an employee is experimenting with artificial intelligence (“AI”) to improve their work product and their efficiency. Maybe it’s someone in finance using AI to analyze and summarize complex data sets into a report for the Board of Directors. Maybe it’s someone in sales preparing their monthly or quarterly sales report.
Most managers would appreciate the initiative shown by these employees. However, information submitted to consumer AI tools such as ChatGPT, Claude, or Gemini can be used to train those tools' underlying large language models. If the employee's prompt contains material non-public information (MNPI) about your company's performance, that MNPI may end up embedded in the model itself.

Not All AI is Created Equal
Public companies must distinguish between Enterprise AI and Consumer AI tools:
- Enterprise AI tools (e.g., OpenAI ChatGPT Enterprise, Microsoft Copilot, Google Gemini for Workspace), which often come with:
  - SOC 2 compliance
  - Data encryption
  - Data privacy (inputs should not be used to train the model)
  - Administrative access controls
- Consumer AI tools (e.g., the public versions of ChatGPT, Claude, or Gemini), which may:
  - Log and use user inputs for model training
  - Have unclear or unfavorable data retention and access policies
  - Provide no contractual guarantees of data confidentiality

Key Artificial Intelligence Risks
The concern is not just theoretical. If an employee of a public company uploads MNPI into a consumer-level AI tool, that data might be retained and viewed by the provider of the AI tool. Even more concerning, that MNPI could influence model outputs for outside users. This could amount to an uncontrolled, unintentional disclosure of material information in violation of securities regulations, particularly Regulation Fair Disclosure (“Reg FD”).
In this article, we examine the intersection of artificial intelligence and securities law, explore the risks of selective disclosure of material information, and provide guidance to help public companies navigate this evolving regulatory gray zone.
What Is Selective Disclosure?
Selective disclosure occurs when a public company discloses material information to a limited group (e.g., analysts or institutional investors) before making it available to the general public. This was precisely the issue the U.S. Securities and Exchange Commission (SEC) sought to address with Regulation FD, adopted in 2000.
Key Principles of Regulation FD
- If a company intentionally discloses MNPI to a market participant (e.g., analysts, shareholders, investment advisors), it must simultaneously make that information public.
- If the disclosure is unintentional, the company must promptly make a public disclosure (typically via a press release or Form 8-K).
Legal Reference: 17 C.F.R. § 243.100(a)
According to the SEC’s 2000 adopting release for Regulation FD, “a selective disclosure is not limited to disclosure to a person physically present, but also to one who otherwise gains access to the information.” (SEC Release No. 33-7881) An AI model that logs prompts and incorporates MNPI into its training corpus would likely qualify as a “person” under Reg FD. If MNPI embedded in the model’s training corpus later surfaces in responses to individuals outside the organization, that would likely constitute selective disclosure.
Cited Source:
SEC Final Rule: Selective Disclosure and Insider Trading
SEC Release No. 33-7881; 34-43154
How Can Using AI Tools Trigger Selective Disclosure?
While AI tools do not themselves broadcast information like a press release or rogue employee might, they may become a disclosure vector if used carelessly. Here’s how:
- Prompt Logging Risks: Some AI platforms log prompts for quality assurance or future model improvement.
- Third-Party Model Use: Uploading MNPI to tools that are not private or under contractual confidentiality (e.g., public APIs) may result in unintentional exposure.
- Team Collaboration Features: Internal AI deployments (e.g., Microsoft Copilot, Notion AI) may allow multiple users to access shared content.
Best Practices and Policy Recommendations
To mitigate risk, companies should treat AI tools with the same caution applied to social media. While feeding an AI tool public information is generally safe, exercise extreme caution with MNPI.
Encourage employees to turn off the training modes of any AI tool they interact with, even if they use that tool only for public information.

Below is a sample policy clause you can adapt:
Sample Policy Language: AI Use and Material Information
Employees must not enter, upload, or disclose any material non-public information into publicly accessible consumer-level AI systems (e.g., ChatGPT, Gemini, Claude, Copilot, or any similar tool). Furthermore, employees must not enter, upload, or disclose any material non-public information into any enterprise-level AI system that does not provide:
- SOC 2 compliance
- Data encryption
- Contractual guarantees regarding data confidentiality
- Explicit commitments not to use inputs for training purposes
- IT governance controls aligned with the company’s data protection policy
Even when using enterprise versions of these tools, users must take all steps necessary to ensure that material non-public information is not inadvertently disclosed to unauthorized parties. All use of AI tools must comply with Regulation FD and internal disclosure protocols.
Recommendations
- Train teams on what constitutes MNPI and where AI fits into your disclosure regime.
- Review AI vendor contracts for data usage and confidentiality terms.
- Segment AI usage: Use public-facing tools for public information only.
- Implement disclosure logging to track any interactions involving material information.
- Appoint a cross-functional AI oversight committee that includes Legal, IR, and IT.
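The segmentation and logging recommendations above can be combined into an internal AI gateway that screens prompts before they leave the company. The sketch below is a minimal, hypothetical illustration: the function name, the keyword patterns, and the blocking logic are all assumptions for demonstration, and real MNPI detection would require far more than keyword matching (e.g., data classification, DLP tooling, and human review).

```python
import logging
import re
from datetime import datetime, timezone

# Illustrative patterns only -- a hypothetical, non-exhaustive list of
# phrases that might indicate MNPI. Real detection needs DLP tooling.
MNPI_PATTERNS = [
    r"\bunreleased\s+earnings\b",
    r"\bdraft\s+8-K\b",
    r"\bmaterial\s+non-?public\b",
]

logger = logging.getLogger("ai_disclosure_log")
logging.basicConfig(level=logging.INFO)

def screen_prompt(prompt: str, user: str) -> bool:
    """Log the interaction and return True if the prompt may be sent
    to an external AI tool; False means it was flagged and blocked."""
    flagged = [p for p in MNPI_PATTERNS
               if re.search(p, prompt, re.IGNORECASE)]
    # Disclosure log: who asked what, when, and whether it was flagged.
    logger.info("user=%s time=%s flagged=%s", user,
                datetime.now(timezone.utc).isoformat(), bool(flagged))
    return not flagged

print(screen_prompt("Summarize our unreleased earnings deck", "jdoe"))   # False
print(screen_prompt("Summarize our latest public 10-K filing", "jdoe"))  # True
```

A gateway of this kind also produces the audit trail the disclosure-logging recommendation calls for, since every prompt is recorded whether or not it is blocked.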
Looking Ahead: Regulation and AI in Capital Markets
The SEC has not yet issued specific guidance on AI and Reg FD, but growing attention to AI’s role in publicly traded companies and algorithmic decision-making suggests that oversight is likely to evolve.
For precedent on emerging technologies and securities law, see:
- SEC’s 2023 Cybersecurity Disclosure Rule: Final Rule
- NIST’s 2023 AI Risk Management Framework
Final Thoughts
The rapid maturation of artificial intelligence will soon force corporate teams, particularly those in finance, communications, and investor relations, to acknowledge that AI will be a defining source of competitive advantage in their company’s competition for capital. By proactively aligning your AI usage with securities laws and Reg FD best practices, your organization can reap the benefits of AI without stumbling into regulatory pitfalls. If you’re struggling with the implementation of digital strategies and artificial intelligence, please consider MCI’s Digital and AI Technology Services.