ACFE Insights Blog

Communicating AI Risks to Employees — And Why Now is the Time to Get Started

By Rihonna Scoggins | February 2025 | 3-minute read
The rapid development of generative artificial intelligence (GenAI) platforms continues to reshape the business landscape. While many organizations are still adjusting to AI-driven risks introduced by industry leaders like OpenAI and Google, a new wave of AI tools like DeepSeek presents fresh challenges that organizations must address. 

The Emergence of DeepSeek and What It Means for Businesses 

DeepSeek, a Chinese AI startup, recently made headlines for its powerful new language model. Like ChatGPT and Claude, DeepSeek promises advanced AI capabilities, but its privacy practices have raised significant concerns. Reports indicate that the platform collects keystroke data and routes information through Chinese servers, sparking worries about data security and regulatory compliance. 

This is not an isolated case. The proliferation of AI startups means that new GenAI tools and updates are rapidly launching, often without the robust security and ethical frameworks that larger established providers have in place. This introduces new risks that employees, often eager to try the latest and greatest AI tools, may not fully understand. 

Why AI Risks Are Just Getting Started 

The financial and fraud-related risks of AI misuse are well documented, with a recent warning from Wall Street regulators highlighting how GenAI is increasingly being exploited in scams and financial fraud. 

Fraudsters are leveraging AI to create highly convincing phishing attacks, deepfake videos and synthetic identities, often with minimal effort. Organizations need to recognize that just because an AI platform is new doesn’t mean it’s any safer than previous iterations. In fact, newer models may introduce even more risks due to looser regulatory oversight and a lack of transparency regarding data handling. 

The Importance of Proactive Employee Communication 

With AI risks evolving rapidly, organizations must act now to educate employees. Employees may not intentionally put sensitive company data at risk, but without clear guidelines, they may upload proprietary information to AI chatbots, share confidential details through unsecured tools or fall victim to AI-powered fraud schemes, exposing their organizations to potential harm. 

To mitigate these risks, organizations should: 

  • Educate employees on AI risks: Regular training on AI security, privacy and fraud risks should be an integral part of workplace education. 
  • Establish clear AI usage policies: Employees need to understand which AI tools are approved, what data can be shared and the consequences of misusing these tools. 
  • Monitor for AI-driven fraud threats: Organizations should stay informed about the latest AI fraud schemes and ensure employees are aware of evolving tactics. 

Preparing Employees for Change 

For organizations looking to strengthen their fraud risk management strategies, the ACFE’s Employee Fraud Awareness Training program provides a structured approach to educating employees on fraud risks, ethical decision-making and compliance issues. As AI-powered fraud techniques become more sophisticated, organizations that invest in employee awareness today will be better equipped to mitigate future risks. 

The emergence of DeepSeek and other next-generation AI platforms underscores the urgency of addressing AI risks now. Employees are the first line of defense against AI-driven fraud, but without the right guidance, they can also be a point of vulnerability. Organizations that take proactive steps through education, policies and structured training will be better prepared to navigate the challenges posed by AI in the years ahead. 