The wake-up call on responsible AI
In October 2025, Deloitte Australia found itself in an uncomfortable spotlight. A report delivered to the Australian government—price tag: $290,000—was riddled with AI-generated errors and hallucinations that make you wonder if anyone read the thing before hitting send.
Deloitte Australia agreed to a partial refund. The Australian Financial Review had a field day. And finance professionals everywhere got a pointed reminder that human oversight still matters in a world rushing toward AI.
But this wasn't really a technology failure. It was a process failure.
The real problem isn’t AI hallucinations
AI hallucinations are well-documented. ChatGPT’s creator, OpenAI, openly discusses how AI models can produce outputs that are “nonsensical or altogether inaccurate” when trained on biased or incomplete data.
But the Deloitte incident reveals something more fundamental: when you treat AI outputs as finished work instead of first drafts, you’re setting yourself up for expensive mistakes.
According to a KPMG study, nearly 60% of employees admit to making mistakes due to AI errors. Even more concerning? About half use AI at work without knowing if it’s allowed, and more than 40% knowingly use it improperly. The common thread is a lack of proper guardrails.
What finance teams need to know before deploying AI
If you’re a CFO or controller exploring AI tools to automate reporting, speed up close processes, or improve forecasting, the Deloitte situation offers three critical lessons:
1. Your data foundation matters more than your AI tool
AI is only as reliable as the data behind it. If your financial data lives across disconnected systems (ERPs, spreadsheets, billing platforms, CRMs), AI tools have no consistent source of truth to work from.
This is where the architecture of your accounting system matters. AI doesn’t fix bad data. It amplifies it.
A unified data model—where all financial transactions, customer records, and operational metrics live in one system—gives AI the clean, consistent foundation it needs to deliver value instead of hallucinations.
2. Human judgment isn’t optional (and never will be)
As Georgia Tech professor Nikki MacKenzie put it in response to the Deloitte incident: “Accountants have to own the work, check the output, and apply their judgment rather than copy and paste whatever the system produces.”
This means:
- Reviewing AI-generated reports line by line before they go to stakeholders
- Cross-checking calculations and cited sources
- Understanding the logic behind AI recommendations (not just accepting them)
Your finance team's expertise becomes even more important once AI enters the mix. That's not cause for alarm; it's a reminder of how much this work depends on skilled people. AI is just another tool in the toolbelt, not a replacement.
3. Process controls matter as much as technology controls
The most sophisticated AI tool in the world won’t protect you if your team doesn’t have clear protocols for:
- When AI outputs need human review
- Who’s responsible for verifying accuracy
- What documentation is required before AI-generated work goes external
- How to flag and correct errors when they inevitably happen
These aren’t “AI problems.” They’re accounting fundamentals. The same controls you’d apply to any automation should apply to AI.
The competitive advantage isn’t speed—it’s safety
Here’s what won’t age well: racing to deploy AI faster than your competitors without building the infrastructure to use it safely.
Here’s what will: implementing AI thoughtfully, with proper data foundations and clear accountability.
The companies that will win with AI aren’t the ones adopting it fastest. They’re the ones who:
- Built unified data architectures before layering AI on top
- Established clear review processes for AI outputs
- Trained their teams to use AI as a productivity multiplier, not a replacement for expertise
What this means for your finance team
If you’re exploring AI tools for finance, start with these questions:
About your data:
- Where does our financial data currently live?
- How many systems do we need to pull from to get a complete picture?
- How much manual work goes into reconciling data across systems?
About your processes:
- Who will be responsible for reviewing AI-generated outputs?
- What approval workflows do we need before AI-generated reports go external?
- How will we document and correct AI errors when they occur?
About your team:
- Does our finance team understand how the AI tools we’re considering actually work?
- Are we clear about where AI adds value vs. where it creates unnecessary risk?
- Do we have the bandwidth and budget to properly implement and monitor new AI tools?
AI isn’t going anywhere. But neither are the principles that make finance teams effective: accurate data, proper controls, and human judgment.
Want to learn more about how to execute AI properly in accounting? Watch the webinar: Why most AI fails in finance (and how to fix it).
See Accounting Seed in action
See how accounting on Salesforce can eliminate the need for costly integrations—and silos of mismatched information—by sharing the same database as your CRM.