The Risks of Using Public AI Models for Proprietary Boardroom Content: Why Security Standards Matter

by Joshua Schiffman

A board member sits in an airport lounge, reviewing materials for tomorrow’s meeting. The financial projections are complex, the competitive analysis dense. She opens a familiar AI tool—one she uses daily for work—and pastes in sections of the board materials, asking for a summary of the key points.

It’s a completely understandable impulse. Public AI tools have become indispensable for millions of professionals. They’re powerful, accessible, and remarkably good at distilling complex information. But when those tools are used with confidential board materials, they create risks that many directors may not fully understand.

The Data Training Problem

The most significant risk with public AI models is what happens to the information you share with them. While major providers have implemented policies around data usage, the fundamental architecture of these systems means that data you input may be used to train and improve their models—unless you’ve specifically opted out or are using an enterprise version with appropriate protections.

For board members, this creates a troubling scenario. That strategic acquisition plan you asked the AI to summarize? The confidential financial data you wanted clarified? All of that information could potentially become part of the AI’s training data, theoretically accessible in some form to future users. Even if the probability is low, for board members with fiduciary responsibilities to protect confidential information, “low probability” isn’t the same as “no risk.”

The Exposure Chain

The risks extend beyond training data. When you upload documents to a public AI service, you’re creating an exposure chain that includes storage on servers you don’t control, processing through systems you don’t manage, and potential review by the provider’s employees under certain circumstances.

If your account is compromised through phishing or other means, an attacker could potentially access your conversation history, including any board materials you’ve shared. Many AI tools integrate with other services or allow plugins—each integration point is a potential vulnerability. Some platforms include features for sharing conversations, making it surprisingly easy to accidentally share confidential information.

The Compliance and Legal Dimension

Beyond technical security risks, using public AI tools with board materials can create compliance complications. Many organizations have clear policies prohibiting the sharing of confidential information with unauthorized third parties—and public AI services, despite their utility, are third parties.

For public companies, there are additional securities-law considerations: material nonpublic information shared through public AI tools can itself create legal exposure. For directors serving on multiple boards, using the same AI tool for every organization means confidential information from different companies ends up in the same account, creating potential conflicts of interest.

Board-Specific Risks

Board members face unique security challenges that make the risks of public AI tools particularly acute. Board materials often contain the most sensitive information a company has—strategic plans, M&A discussions, executive compensation details, financial projections. This isn’t routine business information; it’s information that, if leaked, could significantly harm the company.

Directors have fiduciary duties to protect confidential information. A security breach resulting from a director’s use of public AI tools could create personal liability concerns. Beyond legal liability, there’s reputational risk—a director known to have mishandled confidential information may find it difficult to secure future board positions.

Perhaps most importantly, management teams need to trust that board members will handle sensitive information appropriately. If executives discover that directors are sharing confidential materials with public AI tools, that trust erodes quickly.

What Proper Security Standards Look Like

Understanding these risks makes clear what proper security standards for AI-assisted board work must include:

Purpose-Built Architecture: AI tools designed for board use should be built from the ground up with security as a core requirement. This means systems where data isn’t used for training, where information is encrypted at rest and in transit, and where access controls are granular and robust.

Data Isolation: Board materials for different organizations must remain completely isolated from each other. There should be no possibility of information from one board position being accessible in another context.

Audit Trails: Proper governance requires knowing who accessed what information and when. Security standards should include comprehensive logging and audit capabilities (a simplified sketch of data isolation and audit logging follows this list).

Compliance by Design: The system should be designed to comply with relevant regulatory requirements without requiring directors to become compliance experts.
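To make the data isolation and audit trail standards concrete, here is a minimal illustrative sketch, not a description of Aureclar’s actual implementation. The BoardWorkspace class, its method names, and the example organizations are hypothetical and exist only to show the pattern: every document lives inside exactly one organization’s workspace, and every upload or read is appended to an audit log.

```python
import datetime
import uuid
from dataclasses import dataclass, field


@dataclass
class AuditEvent:
    """One record of who did what to which document, and when."""
    actor: str
    action: str
    document_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc).isoformat()
    )


class BoardWorkspace:
    """Hypothetical per-organization container: documents never leave their workspace."""

    def __init__(self, organization: str) -> None:
        self.organization = organization
        self._documents: dict[str, str] = {}      # document_id -> content
        self._audit_log: list[AuditEvent] = []    # append-only access history

    def add_document(self, actor: str, content: str) -> str:
        doc_id = str(uuid.uuid4())
        self._documents[doc_id] = content
        self._audit_log.append(AuditEvent(actor, "upload", doc_id))
        return doc_id

    def read_document(self, actor: str, doc_id: str) -> str:
        if doc_id not in self._documents:
            # A document created in another workspace simply does not exist here.
            raise KeyError(f"No such document in the {self.organization} workspace")
        self._audit_log.append(AuditEvent(actor, "read", doc_id))
        return self._documents[doc_id]

    def audit_trail(self) -> list[AuditEvent]:
        # Return a copy so callers cannot rewrite history.
        return list(self._audit_log)


# Usage: a director serving on two boards gets two fully separate workspaces.
acme = BoardWorkspace("Acme Corp")
globex = BoardWorkspace("Globex Inc")

doc_id = acme.add_document("director@example.com", "Q3 acquisition analysis")
acme.read_document("director@example.com", doc_id)        # recorded in Acme's audit trail
# globex.read_document("director@example.com", doc_id)    # would raise KeyError: isolation holds

for event in acme.audit_trail():
    print(event.timestamp, event.actor, event.action, event.document_id)
```

A production system would enforce the same ideas with separate encrypted data stores, server-side access controls, and tamper-evident logging rather than in-memory objects, but the structural point is the same: isolation and auditability are properties of the architecture, not features bolted on afterward.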

The Aureclar Approach

This is why Aureclar is a purpose-built platform for board governance rather than an adaptation of consumer AI tools. Security isn’t a feature—it’s foundational to the entire system architecture.

Directors using Aureclar can prepare thoroughly with AI assistance while maintaining the security standards their fiduciary duties require. Information shared with Aureclar stays isolated within secure environments specifically designed for sensitive board materials. There’s no training on user data, no cross-contamination between different board positions, no exposure to broader systems that could create vulnerabilities.

Making the Right Choice

The convenience of public AI tools is undeniable. But for board members, convenience can’t override security responsibilities. The same fiduciary duty that requires directors to carefully review financial statements also requires them to protect confidential information with appropriate safeguards.

Using public AI tools with board materials isn’t just a technical security risk—it’s a governance failure. The good news is that directors don’t have to choose between AI capabilities and security. Purpose-built platforms like Aureclar provide both—the analytical power that makes AI valuable and the security standards that fiduciary duties demand.

Key Security Risks

  1. Public AI tools may use input data for model training, potentially exposing proprietary information to future users.
  2. Confidential materials uploaded to public platforms create exposure chains through storage, processing, and potential account compromise.
  3. Using public AI tools with board materials can violate organizational policies, create legal liability, and erode the trust essential for effective governance.

Essential Security Standards

  1. Purpose-built systems where data isn’t used for training and information remains isolated within secure, board-specific environments.
  2. Enterprise-grade infrastructure with encryption, audit trails, and comprehensive access controls designed for the most sensitive information.
  3. Compliance by design that enables directors to fulfill fiduciary duties with confidence in their tools’ security.

Tags:

security ai-models boardroom-content proprietary data-protection fiduciary-duty aureclar

Ready to Transform Your Board Governance?

Join the growing number of boards that are leveraging AI-powered insights to make better, more informed decisions.