Innovation Update

Beware the dangers of AI chatbots but embrace investigatory advantages

By Carolyn Conn, Ph.D., CFE, CPA, Zachary M. Kelley

Prompts and results of AI chat queries can remain on third-party servers indefinitely. Hackers can exploit them for malevolent intentions, or you can use them to discover evidence during investigations. Here’s how to protect and utilize this valuable information.

An employee at your firm is writing a proposal for a new client. They access ChatGPT and enter a prompt that includes the client’s name, internal financial data, profit margins, budgets, sales figures and other proprietary information. They’ve now innocently exposed your client to hackers and possibly damaged your firm’s reputation.

However, here’s the flip side of artificial intelligence (AI) chatbots. Some of your firm’s products have been disappearing, and you’ve discovered the suspect’s Google Gemini queries about how and where to sell those same stolen products. You probably now have solid evidence of intent.

Witness the yin and yang of AI. Be wary of the dangers but utilize the investigatory advantages.

Risky AI chatbot business

ChatGPT and other AI tools are ubiquitous in our personal and professional lives. At home you might ask ChatGPT what you can make for dinner with the few ingredients in your pantry. And business users might feed critical information into chatbots when they’re drafting reports, generating ideas or developing corporate strategic plans. But know this: Consumer-grade and unmanaged chat platforms alike routinely store user inputs as data records on third-party servers.


In just three years since ChatGPT was released in November 2022, AI chats have become sources of some of the most detailed corporate records of internal reasoning, decision-making and strategic planning that anyone could’ve imagined. However, the same qualities that make AI chat logs valuable to investigators also make them dangerously ungovernable. Retention varies widely: Some AI platforms store chats for three years, others for as long as you have an active account, and still others until you delete the chats manually. As a result, many chat prompts and responses remain available indefinitely.

In April 2023, Samsung was among the first major companies to make startling global news after it discovered employees inadvertently had entered proprietary information on public ChatGPT. At the time, Samsung expressed concerns about data shared with AI chatbots “stored on servers owned by companies operating … service[s] like OpenAI, Microsoft, and Google — with no easy way to access and delete them.” In another widely publicized case, information from a corporation’s internal client project was extracted from its AI assistant within an hour of user entry. Grok, the AI chatbot developed by one of Elon Musk’s companies, raised privacy concerns when it was reported that the chatbot needed very little prompting to provide people’s home addresses and other sensitive PII.

Results from a recent report, “LayerX Enterprise AI & SaaS Data Security Report 2025,” based on browsing activity from select Fortune 500 companies, are alarming:

  • 77% of employees admitted to pasting company information into ChatGPT or similar platforms, confident in the convenience and perceived intelligence of AI assistants.
  • 82% of AI tool usage occurs through unmanaged accounts — personal or third-party logins that operate outside enterprise single sign-on (SSO) and policy enforcement.
  • Critical controls such as multifactor authentication, role-based access controls and detailed audit logs are rendered ineffective.
  • Fileless data transfers via malware [can] evade detection by traditional data-loss prevention solutions, leaving organizations blind to the true scope of data leakage.

The risks extend beyond direct data exposure. Analysts at Proton Mail Blog warn that chatbot logs often include behavioral cues, emotional triggers and decision patterns that can be exploited for targeted social engineering. AI model developers are increasingly using conversational data captured in AI chat sessions for training, which raises serious privacy and corporate-governance implications.

Chat histories can reveal both what employees know and how they think — an asset for any bad actor targeting employees for manipulation or seeking security access.

Evidentiary promise

For all the potential AI pitfalls, chat logs can reveal suspected fraudsters’ authentic thought processes and provide contemporaneous evidence of intent. However, if we don’t secure or properly govern evidence in these logs, they can become massive, vulnerable, uncontrolled archives of privileged, proprietary and sometimes incriminating material.

Traditional evidence — such as email messages, documents, drafts of documents, financial records and interview transcripts — usually doesn’t reveal a suspect’s thought processes between their intentions and actions of committing or concealing misconduct. However, chat data can provide these insights.

For example, suppose you’re investigating a chief financial officer suspected of financial statement fraud, and you access their chatbot queries, including, “How can I adjust these figures so the financial discrepancy on the income statement looks smaller and won’t be noticed?” Fraud examiners may obtain employees’ chatbot queries from the employer if the employer provides the chatbot. To obtain transcripts and logs from external chatbot service providers, a fraud examiner may need the help of legal counsel and/or law enforcement (for a subpoena, warrant or court order).

Or, in a different case, a disgruntled employee who likely embezzled money from the company could’ve written a chat query such as, “Draft a justification memo to my supervisor for this bonus and explain why I am upset about not receiving it.” These chat prompts reveal mindset and planning, not just outcome. Preserved and authenticated logs can vividly demonstrate knowledge, collusion, concealment and intent — an evidentiary goldmine.

A 2025 report from Cyberhaven Labs documented that employees paste sensitive data into unmanaged chatbots at a higher rate than into email or file-sharing platforms. Each interaction leaves traces, timestamps, session identifiers and metadata that can become discoverable evidence. The challenge for organizations is to treat chat conversations, logs and related analysis as they would other forms of evidence: preserve, protect and authenticate them for a solid court case.
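To illustrate what that trace data looks like in practice, here’s a minimal Python sketch that pulls session and timestamp metadata from an exported chat log. The JSON structure, field names and sample session are hypothetical; each real platform uses its own export format.

```python
import json
from datetime import datetime, timezone

# Hypothetical exported chat log. Real platforms each use their
# own export formats; this structure is illustrative only.
SAMPLE_EXPORT = json.dumps({
    "session_id": "abc-123",
    "messages": [
        {"role": "user", "ts": 1700000000, "text": "Draft a justification memo..."},
        {"role": "assistant", "ts": 1700000042, "text": "Here is a draft memo..."},
    ],
})

def summarize_chat_export(raw: str) -> dict:
    """Collect the session identifier, message count and UTC time range
    from an exported chat log, the kind of metadata that can become
    discoverable evidence."""
    data = json.loads(raw)
    stamps = [m["ts"] for m in data["messages"]]
    def to_utc(t):
        return datetime.fromtimestamp(t, tz=timezone.utc).isoformat()
    return {
        "session_id": data["session_id"],
        "message_count": len(data["messages"]),
        "first_message_utc": to_utc(min(stamps)),
        "last_message_utc": to_utc(max(stamps)),
    }

summary = summarize_chat_export(SAMPLE_EXPORT)
print(summary)
```

Even this small summary shows why such records are hard to ignore in discovery: every prompt carries a session identifier and a timestamp tying it to a person and a moment in time.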


Governance, security and chain of custody

A mature fraud risk or compliance program must now treat AI chat logs as part of its information-governance and evidentiary infrastructure. These records require protection, classification and proper life cycle management. Several legal and regulatory developments reinforce that point. In 2025, multiple U.S. states enacted laws regulating commercial AI chatbot use, some of which established consumer rights of action for harms arising from data misuse. The legal environment increasingly views chat interactions as formal business records.

Chat records require governance frameworks that address readiness, compatibility, facilitating conditions and trust. Internally, organizations should designate chat logs as sensitive data subject to encryption, access controls and explicit retention policies. When chat data is relevant to a fraud examination, preserve it with full chain-of-custody documentation, including who accessed it, when and under what authority.

Vendor contracts should also specify ownership, data retention and audit rights. Many free AI tools retain broad rights to user inputs, which create potential conflicts of ownership and confidentiality. When chat logs are introduced as evidence, fraud examiners must verify authenticity, provenance and integrity. Establishing reliability can be complex because AI platforms often modify stored conversations through updates or deletions. Chain-of-custody procedures must evolve to include these dynamic digital sources.
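As a minimal sketch of one such integrity control, the following Python records who accessed an exported chat log, when and under what authority, alongside a SHA-256 digest that later copies can be checked against. The function names and the examiner and authority values are illustrative assumptions, not a prescribed procedure.

```python
import hashlib
from datetime import datetime, timezone

def custody_entry(evidence_bytes: bytes, examiner: str, authority: str) -> dict:
    """Record a chain-of-custody entry: who accessed the evidence, when,
    and under what authority, plus a SHA-256 digest of the exported log."""
    return {
        "sha256": hashlib.sha256(evidence_bytes).hexdigest(),
        "examiner": examiner,
        "accessed_utc": datetime.now(timezone.utc).isoformat(),
        "authority": authority,
    }

def verify_integrity(evidence_bytes: bytes, entry: dict) -> bool:
    """True only if the evidence still matches the recorded digest."""
    return hashlib.sha256(evidence_bytes).hexdigest() == entry["sha256"]

# Illustrative values; the examiner name and authority are hypothetical.
chat_export = b'{"session_id": "abc-123", "messages": []}'
entry = custody_entry(chat_export, examiner="J. Doe, CFE",
                      authority="subpoena (hypothetical)")
print(verify_integrity(chat_export, entry))          # unaltered copy
print(verify_integrity(chat_export + b" ", entry))   # altered copy fails
```

Hashing at the moment of collection gives examiners a simple way to demonstrate later that a chat log introduced as evidence is the same record that was originally preserved, even if the platform subsequently modifies or deletes the stored conversation.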

Implications for fraud examiners

Even though AI chat records can illuminate what occurred and how individuals reasoned their way to possible fraudulent decisions, that same transparency can expose your organization if left unprotected. Uncontrolled chat logs effectively create a “shadow ledger” of internal deliberations that internal and external bad actors could exploit.


We suggest several actions during investigations. Integrate chat-log identification and preservation into standard checklists to ensure that potentially relevant data is secured. To protect your firm and your own professional conduct, evaluate anomalies in AI tool usage and content that could indicate misconduct or data leakage. (See the sidebar “Don’t risk exposing sensitive information during an investigation,” at the end of this article.)

Investigative and assurance teams must coordinate with IT, legal, security and compliance departments to establish appropriate containment policies, responsibly manage records, and define clear, enforceable boundaries on acceptable AI tool use at your firm for all personnel, including yourself.

These actions provide a framework for mitigating risk while leveraging potential new sources of evidence, and they’ll make you more confident in discovering and preserving the behavioral records embedded within AI systems.

Treat chat interactions as public records

AI chat tools, which have become a routine part of business operations and personal conduct, capture more than basic communications. They can record reasoning, decision-making and intent. That creates unique opportunities to strengthen evidentiary precision while also introducing an urgent governance challenge.

Chat logs have the potential to be transformative elements of fraud examinations. But because they can reside on public servers indefinitely, they can also become an unseen and uncontrolled risk.

The guiding principle is simple. Treat chat interactions as public records. Secure them, classify them and govern them.

Carolyn Conn, Ph.D., CFE, CPA, is a clinical associate professor in the Department of Accounting at Texas State University in San Marcos, Texas. Contact her at cc31@txstate.edu.

Zachary M. Kelley is an associate professor of instruction in the Department of Information Systems and Analytics at Texas State University in San Marcos, Texas. Contact him at zachkelley@txstate.edu.

