
Accounting Firm's AI Caught Telling Customers About Each Other's Financial Records



In January 2025, software maker Sage Group quietly took its new AI assistant, Sage Copilot, offline. The company had unveiled Copilot in February 2024 as a “trusted team member” that could automate routine accounting tasks and “recommend ways for customers to create more time and space”. It marketed the assistant as an invite‑only product built with robust encryption, access controls, and compliance with data‑protection regulations. But within months of the launch, the promise of reliability collided with reality.


What happened?


According to reporting by The Register, a customer who asked Sage Copilot for a list of recent invoices discovered that the chatbot pulled data from other customers’ accounts. Sage confirmed that it “briefly paused Sage Copilot” after discovering a “minor issue involving a small amount of customers”. In essence, the AI returned unrelated business information to multiple users. A spokesperson insisted that no invoices were exposed and that the fix was implemented during a brief outage.


From the outside, this might seem like a harmless glitch. The service was still in early access and used by a “small number of customers”. Yet the error illustrates a fundamental challenge with AI systems embedded in multi‑tenant SaaS platforms: the boundary between one customer’s data and another’s exists only in code. Human accountants who work with multiple clients intuitively understand client confidentiality; an AI assistant must rely on explicit safeguards to avoid cross‑account disclosure. In this incident, those safeguards failed, and invoice lists from other businesses surfaced in response to a routine request.
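To see just how thin that boundary is, consider a minimal sketch of a tenant‑scoped invoice query. The table and column names below are hypothetical, not Sage’s actual schema; the point is that a single WHERE clause is often all that separates one customer’s records from everyone else’s, and any layer that assembles such queries, including an AI assistant, must include it every single time:

    import sqlite3

    def list_invoices(conn: sqlite3.Connection, tenant_id: str) -> list[tuple]:
        # Tenant-scoped query: the WHERE clause *is* the security boundary.
        # If any code path (or an AI-generated query) omits this filter,
        # every customer's invoices come back.
        return conn.execute(
            "SELECT invoice_id, amount, issued_on FROM invoices"
            " WHERE tenant_id = ?",
            (tenant_id,),
        ).fetchall()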


The hidden dangers of “intelligent” accounting software


Blake Oliver, a CPA and host of The Accounting Podcast, argued that the Sage incident exposes the “big security challenges” of integrating AI into business systems. When the same large‑language model (LLM) serves multiple companies, it could inadvertently reveal sensitive information unless rigid data partitions are engineered from the outset. Human accountants know not to disclose client details; an AI system does not share that innate understanding. Relying on post‑hoc filters or trust policies is risky. Oliver warns that as accounting and fintech firms rush to implement AI, similar breaches are likely.

The Sage misfire fits into a broader pattern of AI systems behaving unpredictably. The Register notes that AI models regularly generate incorrect or hallucinated outputs and often require human verification. Recent examples include Apple temporarily suspending its AI news summaries after accuracy concerns, Air Canada paying restitution after its chatbot misinformed a passenger about bereavement fares, and McDonald’s ending a drive‑through AI pilot after customers received wrong orders. Other incidents include a dealership chatbot agreeing to sell a Chevy Tahoe for $1 after prompt manipulation and an AI property‑valuation model that triggered a $304 million write‑down. These cautionary tales highlight the fragility of AI systems and the real‑world costs when they misbehave.


Lessons for vendors and customers


Sage’s response—pausing the service, investigating and claiming no sensitive invoices were exposed—is a necessary first step. However, describing the incident as merely a “minor issue” downplays the systemic risk. When an AI is trained on or gains access to multiple customer datasets, architecture matters. Developers must design multi‑tenant AI systems so that queries cannot inadvertently retrieve data from other tenants. This involves more than encrypting data; it requires strict isolation between tenant contexts, rigorous testing of retrieval prompts and oversight by compliance experts.
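One belt‑and‑suspenders pattern, sketched below with hypothetical names rather than anything from Sage’s actual design, is to bind the tenant identity at the API boundary and then re‑verify ownership of every retrieved record before it enters the model’s context, so a faulty retrieval query fails loudly instead of silently leaking data:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Record:
        tenant_id: str
        text: str

    class CrossTenantLeak(Exception):
        """Raised when retrieval returns another tenant's data."""

    def build_llm_context(requesting_tenant: str, records: list[Record]) -> str:
        # Second line of defense: re-check ownership of every retrieved
        # record before it is placed in the prompt, so a broken retrieval
        # query raises an error instead of exposing other tenants' data.
        for rec in records:
            if rec.tenant_id != requesting_tenant:
                raise CrossTenantLeak(
                    f"record owned by {rec.tenant_id!r} "
                    f"retrieved for {requesting_tenant!r}"
                )
        return "\n".join(rec.text for rec in records)

A check like this cannot replace proper isolation at the storage layer, but it turns a silent cross‑account disclosure into a visible failure that can be caught in testing.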


For customers evaluating AI tools, this episode underscores the importance of understanding how vendors safeguard data. Asking questions about data segregation, model training datasets and retrieval mechanisms is just as important as assessing features or pricing. Companies should pilot AI capabilities cautiously, limit early access to non‑sensitive use cases and institute human review until they trust the technology.


For more information, see The Register’s original reporting on the incident.

 
 
 
