During a recent meeting, I was looking at a whiteboard on which an accounting malpractice attorney had mapped out a defense case for an auditor who had failed to detect a large billing-scheme fraud. It was a classic case of unsegregated duties coupled with the misguided belief that a trusted employee would never steal from the organization. Unfortunately for the company and the auditor, the "trusted employee" had walked off with close to a million dollars from a credit card account, a loss that was highly material to the financial statements.
The attorney asked me how the auditor, had he possessed the skills to complete a computerized test, could have analyzed the data to detect the fraud. I explained that the auditor could easily have exported the data to Excel. Then, by running a quick vendor trend report comparing this year's purchases with last year's, he would have seen that the credit card vendor account had ballooned to close to a million dollars in expenses in the current year.
The fraudster had simply written checks for her credit card bills. She committed and concealed the fraud easily because she had the authority to process invoices, write checks, and maintain the bank statements.
Because the attorney was defending the auditor and saw how quickly the auditor could have found the fraud with the computer tool, he asked me not to continue the discussion, citing a sudden onset of heartburn.
This meeting forced me to realize that there's a definite methodology to data analysis in proactively detecting fraud. Even if the auditor had downloaded the company's accounts payable data, he still wouldn't have known how to manipulate it to find the questionable trends pointing to the misappropriation. Though the process I discuss in this column isn't perfect by any stretch, it's also not difficult and presents a consistent way to minimize fraud risk.
The process relies on specific reports, for a given accounting area, that have a track record of detecting fraud. It should follow a formal risk assessment process such as that promulgated by Statement on Auditing Standards (SAS) No. 99, "Consideration of Fraud in a Financial Statement Audit." Internal controls (or, as in the preceding case, the lack of controls surrounding the payment process) would be a key factor in highlighting a given area as a fraud risk. A "what can go wrong" brainstorming session would then highlight the key fraud schemes and the associated data analysis reports. For example, a fraud examiner looking for a billing scheme normally will perform the classic test of matching addresses between the employee and vendor master files.
However, this test wouldn't have found the material fraud in the preceding case because the credit card company had a legitimate business address. So while specific reports, based on past experience, have a better chance of catching the fraudster, they won't detect the uncommon schemes that could sink an organization. Let's face it: any good fraudster knows what's commonly searched for and will develop schemes that slip past such detection reports. To be truly effective, the data analysis methodology must also include a combination of contextual summary, abnormal entry, time trend, and statistical size analysis. I explain each of these tests with the following examples, laid out from the specific (contextual summary) to the general (statistical size).
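To make the classic address-match test concrete, here's a minimal Python sketch. The records, field names, and normalization rule are hypothetical, and a real comparison would need far more robust address standardization (abbreviations, suite numbers, and so on):

```python
# Hypothetical billing-scheme test: match employee addresses against
# vendor addresses. All records and field names are made up.

def normalize(addr):
    """Crude normalization: lowercase, keep only letters and digits."""
    return "".join(ch for ch in addr.lower() if ch.isalnum())

employees = [
    {"name": "J. Smith", "address": "12 Oak St., Springfield"},
    {"name": "A. Jones", "address": "99 Elm Ave., Shelbyville"},
]
vendors = [
    {"vendor": "Acme Supplies", "address": "500 Industrial Pkwy, Capital City"},
    {"vendor": "JS Consulting", "address": "12 Oak St., Springfield"},
]

# Index employees by normalized address, then look up each vendor.
emp_index = {normalize(e["address"]): e["name"] for e in employees}
matches = [(v["vendor"], emp_index[normalize(v["address"])])
           for v in vendors if normalize(v["address"]) in emp_index]
```

Each match pairs a vendor with the employee sharing its address and becomes a lead for follow-up, not proof of fraud.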
Contextual summary - Detecting fraud in transactional data can be as difficult as boiling the ocean. To ease the effort, summarize the data by the various contexts available to the fraud examiner. For instance, a retail operation runs the risk of employees processing false customer returns and pocketing the related cash. While this may be easy to detect in one store with three employees, it's almost impossible to detect when hundreds of employees are spread across multiple stores. By downloading sales activity and summarizing sales returns by employee, you can develop a top 10 list of employees with the highest return counts and amounts. Accomplish this through Microsoft Excel's pivot tables (see my Nov./Dec. 2005 column) or, more simply, through ActiveData™ for Excel, an add-in that can quickly summarize sales information through its Group Summary command. (Select Analysis > Group Summary from the Excel menu bar once ActiveData is installed.) (See Figure 1.)
[Figure 1 is no longer available — Ed.]
Other data summarizations include transaction type, salesperson, location, and geography, to name a few.
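For examiners working outside Excel, the same group-summary idea can be sketched in a few lines of Python. The return records and employee IDs below are hypothetical:

```python
from collections import defaultdict

# Hypothetical sales-return records; in practice these would be
# downloaded from the point-of-sale system.
returns = [
    {"employee": "E01", "amount": 25.00},
    {"employee": "E02", "amount": 40.00},
    {"employee": "E01", "amount": 30.00},
    {"employee": "E03", "amount": 15.00},
    {"employee": "E01", "amount": 55.00},
]

# Accumulate a count and dollar total of returns per employee.
totals = defaultdict(lambda: {"count": 0, "amount": 0.0})
for r in returns:
    t = totals[r["employee"]]
    t["count"] += 1
    t["amount"] += r["amount"]

# Rank employees by total return dollars, highest first (the "top 10" list).
top = sorted(totals.items(), key=lambda kv: kv[1]["amount"], reverse=True)[:10]
```

An employee who dominates the top of the list across stores is the starting point for a closer look at his or her individual return slips.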
Abnormal entry - Input errors can cause a host of problems within computer systems and often point to ineffective input controls that should have prevented inappropriate data entry. Entries worth extracting for further analysis include:
- blank fields for key data points such as vendor tax identification number or address;
- invalid numbers (such as Social Security numbers entered as all 1s or all 9s);
- negative inventory unit prices or quantities; and
- transaction types that aren't on the approved list.
One of the easiest ways to detect inappropriate entries is to filter the field in question with a tool such as Microsoft Excel's AutoFilter. (Select the header row of the data and then Data > Filter > AutoFilter from the Excel menu bar.) To find blanks in the field, click the drop-down arrow next to that field and scroll down to Blanks.
[Figure 2 is no longer available — Ed.]
In addition to filtering blank fields, the Custom option in the AutoFilter drop-down list allows many types of more precise filtering.
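The abnormal-entry extractions listed above can also be scripted. Here's a minimal Python sketch over hypothetical vendor-master records; the field names and flagging rules are illustrations, not a standard checklist:

```python
# Hypothetical vendor-master records illustrating the abnormal-entry tests:
# blank key fields, repeated-digit tax IDs, and negative unit prices.
vendors = [
    {"id": 1, "tax_id": "12-3456789", "address": "10 Main St", "unit_price": 4.50},
    {"id": 2, "tax_id": "",           "address": "22 Pine Rd", "unit_price": 2.00},
    {"id": 3, "tax_id": "99-9999999", "address": "5 Hill Ct",  "unit_price": 3.25},
    {"id": 4, "tax_id": "45-6789012", "address": "8 Lake Dr",  "unit_price": -1.00},
]

def suspicious(v):
    digits = [c for c in v["tax_id"] if c.isdigit()]
    blank = not v["tax_id"].strip() or not v["address"].strip()
    # Tax IDs filled with all 1s or all 9s are classic dummy entries.
    all_same = bool(digits) and len(set(digits)) == 1 and digits[0] in ("1", "9")
    negative = v["unit_price"] < 0
    return blank or all_same or negative

flagged = [v["id"] for v in vendors if suspicious(v)]
```

Each flagged record would then be traced back to its supporting documents and the employee who entered it.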
Time trend - Time trending is probably the most widely used approach to analyzing financial information; the classic example is the review of account-balance changes between years. While such reports are routinely run for financial statement balances, their scope should be extended to vendor purchases by year, customer sales by month, and inventory adjustments by quarter, to name a few. More sophisticated time-trend analysis of the revenue cycle compares sales in the first three quarters with sales in the fourth quarter. (Again, see my Nov./Dec. 2005 column.)
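A vendor-purchases-by-year trend report, the very report that would have exposed the fraud in the opening case, can be sketched as follows. The payment data is hypothetical, and the "more than double the prior year" threshold is an arbitrary illustration; any examiner would tune it to the population:

```python
from collections import defaultdict

# Hypothetical check-register data spanning two years; a large jump in
# any vendor's total (like the credit card account in the opening case)
# stands out immediately.
payments = [
    ("Office Supply Co", 2004, 12000), ("Office Supply Co", 2005, 13500),
    ("Acme Credit Card", 2004, 8000),  ("Acme Credit Card", 2005, 950000),
]

by_vendor = defaultdict(lambda: {2004: 0, 2005: 0})
for vendor, year, amount in payments:
    by_vendor[vendor][year] += amount

# Flag vendors whose current-year spend is more than double the prior year's.
flagged = [v for v, y in by_vendor.items() if y[2005] > 2 * y[2004]]
```

Sorting the flagged vendors by the size of the increase produces the short review list the auditor in the opening case never ran.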
Statistical size analysis - Sometimes errors and fraud can be found in data files simply by reviewing the overall distribution of the numbers in the file. One useful tool for producing such reports is ActiveData for Excel (mentioned above), which can produce descriptive statistics, stratifications, agings, and Benford's Law/digital analysis (explained further in "Benford's Law Made Easy," by David G. Banks, CFE, CIA, in the Sept./Oct. 1999 issue of The White Paper).
[Figure 3 is no longer available — Ed.]
For example, Figure 4 is a Descriptive Statistics report run in ActiveData (select Analysis > Descriptive Statistics from the Excel menu bar after ActiveData is installed) that can quickly identify questionable average, maximum, or minimum values within a population.
[Figure 4 is no longer available — Ed.]
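For those without ActiveData, descriptive statistics and a first-digit (Benford's Law) tally can be sketched with Python's standard library. The invoice amounts below are hypothetical, and a real Benford's Law test needs a far larger population than this toy sample:

```python
import statistics
from collections import Counter
from math import log10

# Hypothetical invoice amounts; real data would come from the A/P file.
amounts = [120.50, 89.99, 3400.00, 76.25, 910.00, 45.10, 230.75, 18500.00]

# Descriptive statistics: an implausible maximum, minimum, or average
# is a quick pointer to questionable transactions.
stats = {
    "count": len(amounts),
    "mean": statistics.mean(amounts),
    "max": max(amounts),
    "min": min(amounts),
}

# Benford's Law: in many naturally occurring data sets the leading digit d
# appears with probability log10(1 + 1/d); large deviations merit review.
first_digits = Counter(str(a).lstrip("0.")[0] for a in amounts)
expected = {str(d): log10(1 + 1 / d) for d in range(1, 10)}
observed = {d: c / len(amounts) for d, c in first_digits.items()}
```

Comparing the observed frequencies with the expected ones, digit by digit, highlights ranges of the file where numbers may have been invented rather than generated by normal business activity.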
Capturing the elusive
Fraud is elusive. Fraudsters are always concocting new schemes to fool companies, and operations are becoming so digital that effective reviews require a data analysis methodology such as the one presented here.
Richard B. Lanza, CFE, CPA-CITP, PMP, president of Cash Recovery Partners, L.L.C., in Lake Hopatcong, N.J., provides audit technology and project management assistance to companies. He focuses much of his time in developing computerized audit and fraud tests.
The Association of Certified Fraud Examiners assumes sole copyright of any article published on ACFE.com. ACFE follows a policy of exclusive publication. Permission of the publisher is required before an article can be copied or reproduced. Requests for reprinting an article in any form must be e-mailed to: FraudMagazine@ACFE.com.