Investment committees and executive sponsors increasingly approve AI projects with a value case that ignores the data risk. When these projects produce incidents, the post-mortem usually surfaces the same finding: data-level questions were not asked before funding.
Key Takeaways
- 63% of organizations lacked AI governance policies at the time of their breach
- 97% of AI-related breaches lacked proper AI access controls
- 53% of M&A deals encounter critical cybersecurity issues that jeopardize the transaction
- Only 37% of organizations have AI governance policies in place
Do We Know Where the Sensitive Data the AI Will Touch Lives?
This is the first question, and the answer is either yes, with a document to show, or no. There is no third option. If the project team cannot produce a data map or lineage diagram for the data their AI use case will process, the project is being approved without the information required to assess it. Because data discovery is the prerequisite for any AI deployment, this question is Step 0.
The follow-on question for management is: what is our plan to produce this before integration? If the plan is "during implementation," the funding decision has been made without evidence.
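As an illustration only (the dataset and artifact names below are hypothetical, not a real schema), the "yes, with a document to show, or no" gate can be sketched as a simple pre-funding check: every dataset the use case will touch either has a documented data map, or the project is not ready to score.

```python
# Hypothetical sketch of the data-map funding gate.
# Dataset and artifact names are illustrative.

def data_map_gate(use_case_datasets, documented_datasets):
    """Return the datasets the project cannot document.

    An empty result means the committee has the evidence it needs;
    anything else means it is approving a plan, not a decision.
    """
    return sorted(set(use_case_datasets) - set(documented_datasets))

# A use case touching three datasets, with lineage documented for two:
missing = data_map_gate(
    ["crm_contacts", "support_tickets", "billing_history"],
    ["crm_contacts", "support_tickets"],
)
print(missing)  # ['billing_history'] -- there is no third option
```

The point of the sketch is that the check is binary and mechanical: no judgment call is needed to know whether the document exists.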
Which AI Use Cases Intersect with Regulated or Client Data?
Every AI project in a regulated industry (financial services, healthcare, telecommunications, energy) needs a direct answer. For any use case that touches regulated data, four sub-questions apply:
| Sub-Question | What It Surfaces |
|---|---|
| What regulatory regime applies to this data? | GDPR, HIPAA, SOX, NIS2, sector-specific rules |
| What contractual commitments govern its use? | Customer contracts, vendor agreements, data processing agreements |
| Where is the data physically processed? | Data residency obligations, cross-border transfer mechanisms |
| What audit trail demonstrates compliance? | Logging, access records, processing documentation |
If any of these cannot be answered before funding, the project's timeline needs to include the work to answer them, and the committee should budget for it explicitly rather than assume it disappears into implementation. With multiple EU frameworks converging on these same data obligations, the regulatory mapping is only getting more complex.
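The four sub-questions above can be treated as a minimal readiness-scoring template. The sketch below is a hypothetical illustration (field names and answers are invented, not a real framework): a use case is funded-ready only when all four are answered, and the gaps become explicit budget lines.

```python
# Hypothetical data-readiness scoring sketch for the four sub-questions.
# Field names are illustrative; a real template would be tailored per regime.

REGULATED_DATA_QUESTIONS = [
    "regulatory_regime",        # GDPR, HIPAA, SOX, NIS2, sector rules
    "contractual_commitments",  # customer contracts, vendor agreements, DPAs
    "processing_location",      # residency, cross-border transfer mechanisms
    "audit_trail",              # logging, access records, documentation
]

def readiness(answers):
    """Score a use case: ready only if all four questions are answered."""
    gaps = [q for q in REGULATED_DATA_QUESTIONS if not answers.get(q)]
    return (len(gaps) == 0, gaps)

ready, gaps = readiness({
    "regulatory_regime": "GDPR",
    "contractual_commitments": "DPAs in place with all processors",
    "processing_location": "EU only, no cross-border transfers",
    "audit_trail": None,  # unanswered: budget this work explicitly
})
print(ready, gaps)  # False ['audit_trail']
```

Each unanswered item surfaces as a named gap rather than vanishing into "implementation."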
What Contractual and Regulatory Exposures Do We Create If Agents Misbehave?
AI agents take actions. Some of those actions, when they go wrong, will produce contractual liability. A sales agent that sends messages to customers is operating under the company's brand and commitments. A procurement agent that places orders is entering contracts. A support agent that answers questions is making factual claims the customer may rely on.
The committee question is not whether these things will go wrong. They will. The question is what the exposure looks like when they do. Specifically: what is the maximum contractual liability per incident, what is the regulatory exposure under consumer protection or financial services rules, what is the reputational exposure, and what is the insurance position. The questions deal teams ask about these same risks in acquisition targets apply with equal force to internal AI investments.
If the answer is "we have not thought about this," the project should not be funded until it has been.
What the Committee Does With the Answers
The goal is not to block AI investment. It is to fund AI investment with the same rigor applied to any capital allocation decision. A project with clear data, regulatory, and contractual answers is ready to be funded. A project without them is a request to approve a plan, not a decision.
Applied consistently, this discipline produces a portfolio of AI investments where the committee can tell the board, quarter over quarter, what the aggregate exposure is. That is the artifact the board should be asking for.
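As a sketch of what that board-level artifact might look like (project names and exposure figures below are entirely hypothetical), the quarter-over-quarter number can be a simple roll-up of per-project answers:

```python
# Illustrative roll-up of per-project exposure into a board-level summary.
# Project names and liability estimates are hypothetical.

portfolio = [
    {"project": "sales_agent",   "max_contract_liability": 250_000, "open_gaps": 0},
    {"project": "support_agent", "max_contract_liability": 400_000, "open_gaps": 1},
]

def board_summary(projects):
    """Aggregate exposure and open-gap count across the AI portfolio."""
    return {
        "aggregate_exposure": sum(p["max_contract_liability"] for p in projects),
        "projects_with_open_gaps": sum(1 for p in projects if p["open_gaps"]),
    }

print(board_summary(portfolio))
# {'aggregate_exposure': 650000, 'projects_with_open_gaps': 1}
```

The arithmetic is trivial by design: the hard work is producing honest per-project inputs, which is what the pre-funding questions force.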
The AI Investment Committee Checklist includes the regulatory exposure matrix by industry and the data-readiness scoring template.
Innovaiden works with leadership teams deploying AI agents across their organizations, from initial setup and training to security framework alignment and governance readiness. Reach out to discuss how we can help your team.
Frequently Asked Questions
Why should data questions precede AI funding decisions?
When AI projects produce incidents, the post-mortem almost always surfaces the same finding: data-level questions were not asked before funding. 63% of organizations lacked AI governance policies at the time of their breach, and 97% of AI-related breaches lacked proper access controls. The data risk was present before the AI project started.
What is the first question an investment committee should ask about any AI project?
Do we know where the sensitive data this AI will touch lives? The answer is either yes, with a document to show, or no. If the project team cannot produce a data map or lineage diagram for the data their AI use case will process, the project is being approved without the information required to assess it.
What contractual exposures do AI agents create when they misbehave?
A sales agent that sends messages operates under the company's brand. A procurement agent that places orders enters contracts. A support agent that answers questions makes factual claims customers may rely on. The committee question is not whether these things will go wrong (they will) but what the maximum contractual liability, regulatory exposure, reputational cost, and insurance position look like when they do.
How should committees use the answers to these data questions?
A project with clear data, regulatory, and contractual answers is ready to be funded. A project without them is a request to approve a plan, not a decision. Applied consistently, this discipline produces a portfolio of AI investments where the committee can tell the board, quarter over quarter, what the aggregate exposure is.