
Dr. Fraudster & the Billing Anomaly Continuum

This month's members' lecture on Medicare and Medicaid fraud triggered a couple of Chapter member requests for more specifics about how health care fraud detection analytics work in actual practice.

It's a truism within the specialty of health care billing data analytics that the harder you work on the front end, the more successful you'll be in producing information that generates productive results on the back end. Indeed, in the output of health care analytics applications, fraud examiners and health care auditors now have an increasingly powerful set of tools to use in the audit and investigation of fraud generally and of health care fraud specifically; I'm referring, of course, to analytically supported analysis of what's called the billing anomaly continuum.

The use of the anomaly continuum in the general investigative process starts with detection, proceeds to investigation and mitigation, and then (depending on the severity of the case) can lead to the follow-on phases of prevention, response and recovery. We'll discuss only the first three phases here, as the most relevant to the fraud examination process, and leave prevention, response and recovery for a later post.

Detection is the discovery of clues within the data. The process involves taking individual data segments related to the whole health care process (from the initial provision of care by the health care provider all the way to the billing and payment for that care by the insurance provider) and blending them into one data source for seamless analysis. Any anomalies in the data can then be noted, and the output evaluated either for response or for follow-up investigation. It is these identified anomalies that will, at the end of the present investigative process, feed the detection database for future analysis.
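To make the blending step concrete, here is a minimal sketch in Python (pandas), assuming three hypothetical extracts with made-up column names: claims from the provider side, provider attributes, and payments from the insurer side. It is an illustration of the approach, not a description of any particular Medicare system.

# Minimal sketch: blend separate extracts (claims, providers, payments) into one
# analysis table and flag simple anomalies. File and column names are hypothetical.
import pandas as pd

claims = pd.read_csv("claims.csv")        # claim_id, provider_id, cpt_code, billed_amount, service_date
providers = pd.read_csv("providers.csv")  # provider_id, license_status, staff_count
payments = pd.read_csv("payments.csv")    # claim_id, paid_amount, paid_date

# Blend the segments into a single data source for seamless analysis.
blended = (claims
           .merge(providers, on="provider_id", how="left")
           .merge(payments, on="claim_id", how="left"))

# Note simple anomalies: billed amounts far above each provider's own average.
stats = blended.groupby("provider_id")["billed_amount"].agg(["mean", "std"])
blended = blended.join(stats, on="provider_id")
blended["anomaly"] = (blended["billed_amount"] >
                      blended["mean"] + 3 * blended["std"].fillna(0))

# Anomalies feed either an immediate response or a follow-up investigation,
# and are retained for the detection database.
blended[blended["anomaly"]].to_csv("anomalies_for_review.csv", index=False)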

As an example of an actual Medicare case, let's say we have a health care provider whom we'll call Dr. Fraudster, some of whose billing data reveals a higher than average percentage of complicated (and costly) patient visits. It also seems that Dr. Fraudster apparently generated some of these billings while traveling outside the country. There were also referred patient visits to chiropractors, acupuncturists, massage therapists, nutritionists and personal trainers at a local gym whose services were billed under Dr. Fraudster's tax ID number as well as under standard MD Current Procedural Terminology (CPT) visit codes. In addition, Dr. Outlander, an unlicensed doctor on Dr. Fraudster's staff, was billed out at $5 an hour. Besides Outlander, a Dr. Absent was noted as billing out of Dr. Fraudster's clinic even though he was no longer associated with the clinic.

First off, in the initial detection phase, it seems Dr. Fraudster's high-volume activity flagged an edit function that tracks an above-average practice growth rate without the addition of new staff on the claim form. Another anomalous activity picked up was the appearance of wellness services presented as illness-based services. And the billed provision of services while the doctor was traveling abroad is certainly anomalous as well.
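The edit functions described above amount to simple rules applied to the blended data. The sketch below shows the idea, with field names, codes, and thresholds invented purely for illustration.

# Sketch of the kinds of detection edits described above, applied to one claim
# row from the blended table. Fields, codes, and thresholds are illustrative.
WELLNESS_CODES = {"W0001", "W0002"}   # placeholder codes standing in for wellness-type services

def detection_flags(row, growth_rate, staff_added, travel_dates):
    flags = []
    # Above-average practice growth without additional staff on the claim form.
    if growth_rate > 0.25 and staff_added == 0:
        flags.append("growth_without_new_staff")
    # Wellness services presented as illness-based services.
    if row["cpt_code"] in WELLNESS_CODES and row["diagnosis_type"] == "illness":
        flags.append("wellness_billed_as_illness")
    # Services billed while the provider was out of the country.
    if row["service_date"] in travel_dates:
        flags.append("billed_while_traveling")
    return flags

print(detection_flags({"cpt_code": "W0001", "diagnosis_type": "illness",
                       "service_date": "2015-06-12"},
                      growth_rate=0.40, staff_added=0,
                      travel_dates={"2015-06-12"}))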

The following investigation phase involves ascertaining whether various activities or statements are true. In Dr. Fraudster's case, evidence to collect regarding his on-staff associate, Dr. Outlander, might include confirmation of license status (if any), educational training, clinic marketing materials and payroll records. The high percentage of complicated visits and the foreign travel issues need to be broken down, with each activity analyzed separately and in full detail. If Dr. Fraudster truly has a patient population with a high rate of complications, most of these patients would likely be receiving some type of prescription regimen. The lack of a diagnosis requirement tied to the associated prescriptions limited the scope of the real-life investigation. Was Dr. Fraudster prescribing medications with no basis? If he uses an unlicensed doctor on his staff, presents wellness services as illness-related services, and sees himself (perhaps) as a caring doctor getting reluctant insurance companies to pay for alternative health treatments, what other alternative treatments might he be providing with prescribed medications? Also, Dr. Fraudster had to know that the bills submitted during his foreign travels were false. Statistical analysis, combined with clinical analysis of the medical records by actual provider and a review of travel records, would provide a strong argument that the doctor intended to misrepresent his claims.

The mitigation phase typically builds on issues noted during detection and investigation. Mitigation is the process of reducing a certain set of circumstances or making them less severe. In the case of Dr. Fraudster, mitigation occurred in the form of prosecution. Dr. Fraudster was convicted of false claims and removed from the Medicare network as a licensed physician, thereby preventing further harm and loss. Other issues that came forward at trial were evidence of substandard care and medical unbelievability patterns (CPT codes billed that made no sense except to inflate the billing). What made this case even more complicated was tracking down Dr. Fraudster's assets. Ultimately, the real-life Dr. Fraudster received a criminal conviction, civil lawsuits were initiated, and he lost his license.

From an analytics point of view, mitigation does not stop at the conviction of the perpetrator. The findings regarding all individual anomalies identified in the case should be followed up with adjustment of the insurance company's administrative adjudication and edit procedures (Medicare was the third-party claims payer in this case). In other words, the findings from every fraud case should be fed back into the analytics system. Incorporating the patterns of Dr. Fraudster's fraud into the Medicare Fraud Prevention Model will help prevent or minimize similar future occurrences, help find similar on-going schemes elsewhere with other providers, and reduce the time it takes to discover those other schemes. A complete mitigation process also feeds detection by reducing the amount of investigative time required to make the existence of a fraud known.
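That feedback loop can be as simple as appending the confirmed findings of a closed case to the detection data store that future models and edits draw on. A minimal sketch, with hypothetical file and column names:

# Sketch: feed confirmed findings from a closed case back into the detection
# data store so future models and edits can learn from them.
import pandas as pd

case_findings = pd.DataFrame({
    "claim_id": [101, 102, 103],
    "anomaly_type": ["wellness_billed_as_illness",
                     "billed_while_traveling",
                     "unlicensed_provider"],
    "confirmed_fraud": [True, True, True],
})

history = pd.read_csv("detection_database.csv")
updated = pd.concat([history, case_findings], ignore_index=True)
updated.to_csv("detection_database.csv", index=False)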

As practicing fraud examiners, we are provided by the ACFE with an examination methodology quite powerful in its ability to extend and support all three phases of the health care fraud anomaly identification process presented above.  There are essentially three tools available to the fraud examiner in every health care fraud examination, all of which can significantly extend the value of the overall analytics based health care fraud investigative process.  The first is interviewing – the process of obtaining relevant information about the matter from those with knowledge of it.  The second is supporting documents – the examiner is skilled at examining financial statements, books and records.   The examiner also knows the legal ramifications of the evidence and how to maintain the chain of custody over documents.  The third is observation – the examiner is often placed in a position where s/he can observe behavior, search for displays of wealth and, in some instances, even observe specific offenses.

Dovetailing the work of the fraud examiner with that of the healthcare analytics team is a win for both parties to any healthcare fraud investigation and represents a considerable strengthening of the entire long term healthcare fraud mitigation process.

Mining the General Ledger

I was chatting via Skype this past weekend with a former officer of our Chapter who left the Richmond area many years ago to found his own highly successful forensic accounting practice on the west coast. During our conversation, he remarked that he never fails to intensively indoctrinate trainees new to his organization in the primary importance of the general ledger in any investigation of financial fraud. With a good sense of those areas of the financial statements most vulnerable to fraud, and with whatever clues the investigative team has gleaned from an initial set of interviews focused on the accounting entries that first aroused suspicion, he tells his trainees they're ready to turn their attention to a place with the potential to provide a cornucopia of useful information. That place is the client firm's own accounting system general ledger.

My old colleague pointed out that for a fraud examiner or forensic accountant on the search for fraud, there are several great things about the general ledger. One is that virtually all sophisticated financial reporting systems have one. Another is that, as the primary accounting tool of the company, it reflects every transaction the company has entered.

He went on to say that unless the fraud has been perpetrated simply through last-minute topside adjustments, it's captured in the general ledger somewhere. What's vital is knowing how, and where, to look. The important thing to keep in mind is the way the ACFE tells us financial fraud starts and grows. That guidance says that ledger entries made at particular points in time, say the final days leading up to the end of a quarter, are more likely to reflect falsified information than entries made at earlier points. Beyond that, a fraudulent general ledger entry in the closing days of a quarter may exhibit unusual characteristics. For example, the amounts involved, having been determined by the need to cross a certain numerical threshold rather than by a legitimate business transaction, may by their very nature look a bit strange: perhaps they're larger than might be expected, or rounded off. It also may be that unusual corporate personnel were involved, executives who would not normally have a hand in general ledger entries. Or, if the manipulating executives are not thinking far enough ahead, the documentation behind the journal entries themselves may not be complete or free from suspicion. For example, a non-routine, unusually large ledger entry with rounded numbers that was atypically made at the direction of a senior executive two days before the end of a quarter should arouse some suspicion.
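Those red flags translate naturally into a screening query over the ledger. The sketch below assumes a general ledger extract with invented column names (post_date, amount, source, entered_by) and arbitrary thresholds; a real screen would be tuned to the client's chart of accounts and materiality.

# Sketch of a general ledger screen for the red flags described above:
# large, rounded, manual entries posted in the closing days of a quarter.
import pandas as pd

gl = pd.read_csv("general_ledger.csv", parse_dates=["post_date"])

quarter_end = gl["post_date"].dt.to_period("Q").dt.end_time
days_to_close = (quarter_end - gl["post_date"]).dt.days

suspicious = gl[
    (days_to_close <= 3)                  # posted in the final days of a quarter
    & (gl["amount"].abs() >= 100_000)     # larger than expected
    & (gl["amount"] % 1_000 == 0)         # suspiciously round
    & (gl["source"] == "manual")          # not from an automated sub-ledger
]
print(suspicious[["entry_id", "post_date", "amount", "entered_by"]])

Entries surfaced by a screen like this are candidates for the follow-up conversations and documentation requests described next, not proof of anything by themselves.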

Indeed, once a suspicious general ledger entry has been identified, determining its legitimacy can be fairly straightforward. Sometimes it involves simply a conversation with the employee who physically made the entry. My colleague went on to point out that, in his experience, senior executives seeking to perpetrate financial fraud often suffer from a significant handicap: they don't know how to make entries to the accounting system. To see that a fraudulent entry is made, they have to ask some employee sitting at a computer screen somewhere to do it for them, someone who, if properly trained, may want to fully understand the support for a non-routine transaction coming from an unusual source. Of course, if the employee's boss simply orders him or her to make the entry, resistance may be awkward. But if suspicions are aroused, the direction to make the entry may stick in the employee's memory, giving the employee the ability to later describe in convincing detail exactly how the ledger entry came to be made. Or, concerned about the implications and the appearance of his own complicity, the employee may include with the journal entry an explanation that captures his skepticism. The senior executive directing the entry may be oblivious to all this, thinking the general ledger has been successfully adjusted to create the needed earnings, little knowing that within the ledger entry the data-entering employee has embedded incriminating evidence for the forensic accountants to find.

The general ledger may also reflect large transactions that are suspicious simply by their nature. The investigators may want to ask the responsible executive about such a transaction's business purpose, underlying terms, timing, and the nature of the negotiations. Transaction documentation might be compared to the general ledger entry to make sure nothing was left out or changed. If feasible, the forensic accountants may even want to reach out to the entry's counter-party to explore whether there are unrecorded terms in side letters or otherwise undisclosed aspects of the transaction.

As we all know, an investigation will not ordinarily stop with clues gleaned from the general ledger. For example, frequently a useful step is to assess the extent to which a company has accounted for significant or suspicious transactions in accordance with their underlying terms. Such scrutiny may include a search for undisclosed terms, such as those that may be included in side letters or pursuant to oral agreements. In searching for such things, the investigators will seek to cast a wide net and may try to coax helpful information from knowledgeable company personnel outside the accounting function. As our former Central Virginia Chapter officer put it, “I like to talk to the guys on the loading dock. They’ll tell you anything.”

As I’m sure most readers of this blog are aware, while such forensic accounting techniques, and there are many others, can be undertaken independently of what employee interviews turn up, usually the two will go hand in hand. For example, an interview of one employee might yield suspicions about a particular journal entry, which is then dug out of the accounting system and itself investigated. Or an automated search of the general ledger may yield evidence of a suspicious transaction, resulting in additional interviews of employees. Before long, the investigative trail may look like a roadmap of Washington DC. Clues are discovered, cross-checked against other information, and explored further. Employees are examined on entries and, as additional information surfaces, examined again. As the investigation progresses, shapes start to appear in the fog. Patterns emerge. And those executives not being completely candid look increasingly suspicious.

So, with thanks to our good friend for sharing, in summary, if there is predication of a fraud, what sorts of things might a thorough forensic examination of the general ledger reveal?

–The journal entries that the company recorded to implement the fraud;

–The dates on which the company recorded fraudulent transactions;

–The sources for the amounts recorded (e.g., an automated sub-accounting system, such as purchasing or treasury, versus a manually prepared journal entry);

–The company employee responsible for entering the journal entries into the accounting system;

–Adjusting journal entries that may have been recorded.

Where the Money Is

One of the followers of our Central Virginia Chapter's group on LinkedIn is a bank auditor heavily engaged in his organization's analytics-based fraud control program. He was kind enough to share some of his thoughts regarding his organization's sophisticated anti-fraud data modeling program as material for this blog post.

Our LinkedIn connection reports that, in his opinion, getting fraud data accurately captured, categorized, and stored is the first, vitally important challenge in using data-driven technology to combat fraud losses. This might seem relatively easy to those not directly involved in the process, but experience quickly reveals that having fraud-related data stored reliably over a long period of time, and in a readily accessible format, represents a significant challenge requiring a systematic approach at all levels of any organization serious about the effective application of analytically supported fraud management. The idea that any single piece of data could be of potential importance to addressing a problem is a relatively new concept in the history of banking and of most other types of financial enterprises.

Accumulating accurate data starts with an overall vision of how the multiple steps in the process connect to affect the outcome. Every member of the fraud control team needs to understand how important each pre-defined process step is in capturing the information correctly, from the person responsible for risk management in the organization, to the people who run the fraud analytics program, to the person who designs the data layout, to the person who enters the data. Even a customer service analyst or a fraud analyst failing to mark a certain type of transaction correctly as fraud can have an on-going impact on the development of an accurate fraud control system. It really helps to establish rigorous processes of data entry on the front end and to explain to all players exactly why those specific processes are in place. Process without communication and communication without process are both unlikely to produce desirable results. To underscore the importance of recording fraud information correctly, management should communicate to everyone a general understanding of how a data-driven detection system (whether based on simple rules or on sophisticated models) is developed.

Our connection goes on to say that even after an organization has implemented a fraud detection system that is based on sophisticated techniques and that can execute effectively in real time, it’s important for the operational staff to use the output recommendations of the system effectively. There are three ways that fraud management can improve results within even a highly sophisticated system like that of our LinkedIn connection.

The first strategy is never to allow operational staff to second-guess a sophisticated model at will. Very often, a model score of 900 (let's say this is an indicator of very high fraud risk), when combined with some decision keys and sometimes on its own, can perform extremely well as a fraud predictor. It's good practice to use the scores in this high-risk range generated by a tested model as is and not allow individual analysts to adjust them further. This policy has to be completely understood and controlled at the operational level. Using a well-developed fraud score as is, without watering it down, is one of the most important operational strategies for the long-term success of any model. Applying this rule also makes it simpler to identify instances of model scoring failure, since the scores remain free of any subsequent analyst adjustments.

Second, fraud analysts will have to be trained to use the scores and the reason codes (reason codes explain why the score is indicative of risk) effectively in operations. Typically, this is done by writing some rules in operations that incorporate the scores and reason codes as decision keys. In the fraud management world, these rules are generally referred to as strategies. It’s extremely important to ensure strategies are applied uniformly by all fraud analysts. It’s also essential to closely monitor how the fraud analysts are operating using the scores and strategies.
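A strategy of this kind might look, in stripped-down form, like the sketch below. The score threshold, reason codes, and actions are invented for illustration and are not our connection's actual rules.

# Sketch of an operational "strategy": a rule that uses the model score and
# reason codes as decision keys. Thresholds, codes, and actions are assumptions.
HIGH_RISK_SCORE = 900

def apply_strategy(score, reason_codes, txn_amount):
    # Never water down a high score: act on it as is.
    if score >= HIGH_RISK_SCORE:
        return "decline_and_review"
    # Mid-range scores: let reason codes and amount drive the decision.
    if score >= 700 and "R03_GEO_MISMATCH" in reason_codes and txn_amount > 500:
        return "step_up_authentication"
    return "approve"

print(apply_strategy(925, ["R01_VELOCITY"], 120))       # decline_and_review
print(apply_strategy(740, ["R03_GEO_MISMATCH"], 800))   # step_up_authentication

Because the rule is explicit, it can be applied uniformly by every analyst and monitored centrally, which is exactly the point being made above.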

Third, it’s very important to train the analysts to mark transactions that are confirmed or reported to be fraudulent by the organization’s customers accurately in their data store.

All three of these strategies may seem very straightforward to accomplish, but in practical terms they are not that easy without a lot of planning, time, and energy. A superior fraud detection system can be rendered almost useless if it is not used correctly. It is extremely important to allow the right level of employee to exercise the right level of judgment. Again, individual fraud analysts should not be allowed to second-guess the efficacy of a fraud score that is the result of a sophisticated model. Similarly, planners of operations should take into account all practical limitations when coming up with fraud strategies (fraud scenarios). Ensuring that all of this gets done the right way, with the right emphasis, ultimately leads the organization to good, effective fraud management.

At the heart of any fraud detection system is a rule or a model that attempts to detect a behavior observed repeatedly, at various frequencies, in the past, and classifies it as fraud or non-fraud with a certain rank ordering. We would like to figure out this behavior scenario in advance and stop it in its tracks. What we observe from historical data and our experience needs to be converted into some sort of rule that can be systematically applied to the data in real time in the future. We expect that these rules or models will improve our chance of detecting aberrations in behavior and help us distinguish between genuine customers and fraudsters in a timely manner. The goal is to stop the bleeding of cash from the account, and to do so as close to the start of the fraud episode as we can. If banks can accurately identify early indicators of on-going fraud, significant losses can be avoided.

In statistical terms, what we define as a fraud scenario would be the dependent variable or the variable we are trying to predict (or detect) using a model. We would try to use a few independent variables (as many of the variables used in the model tend to have some dependency on each other in real life) to detect fraud. Fundamentally, at this stage we are trying to model the fraud scenario using these independent variables. Typically, a model attempts to detect fraud as opposed to predict fraud. We are not trying to say that fraud is likely to happen on this entity in the future; rather, we are trying to determine whether fraud is likely happening at the present moment, and the goal of the fraud model is to identify this as close to the time that the fraud starts as possible.
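In code, that modeling step might look like the following sketch, which fits a simple classifier to a handful of hypothetical independent variables in order to rank-order transactions by fraud likelihood. The feature names and file are assumptions; any classifier capable of producing a rank ordering could stand in.

# Minimal sketch of the modeling step: independent variables used to detect
# the dependent variable (a confirmed-fraud flag). Names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("transactions.csv")
features = ["amount", "txn_velocity_1h", "distance_from_home", "new_merchant"]
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["is_fraud"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank-order transactions by fraud likelihood, as described above.
scores = model.predict_proba(X_test)[:, 1]
print(pd.Series(scores).describe())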

In credit risk management, we try to predict whether there will be serious delinquency or default in the future, based on the behavior the entity exhibits today. In fraud detection, by contrast, not having accurate fraud data during the model-building process is akin to not knowing what the target is on a shooting range. If a model or rule is built on data that is only 75 percent accurate, the model's accuracy and effectiveness will be suspect as well. There are two sides to this problem. Suppose we mark 25 percent of the fraudulent transactions inaccurately as non-fraud or good transactions. Not only do we miss out on learning from a significant portion of fraudulent behavior; by misclassifying it as non-fraud, we also lead the model to assume that behavior is actually good behavior. Hence, misclassification of data affects both sides of the equation. Accurate fraud data is fundamental to addressing the fraud problem effectively.
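The effect of that 25 percent mislabeling can be demonstrated on synthetic data: train one model on clean labels and one on labels with a quarter of the frauds flipped to non-fraud, then compare how much known fraud each catches. This is purely an illustrative experiment, not our connection's data.

# Sketch: flip 25% of the fraud labels in the training data and compare
# recall on a clean test set. Data here is synthetic, for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

X, y = make_classification(n_samples=20000, weights=[0.95, 0.05], random_state=0)
X_train, X_test = X[:15000], X[15000:]
y_train, y_test = y[:15000], y[15000:]

# Mislabel 25% of the training frauds as good transactions.
rng = np.random.default_rng(0)
fraud_idx = np.where(y_train == 1)[0]
flipped = y_train.copy()
flipped[rng.choice(fraud_idx, size=len(fraud_idx) // 4, replace=False)] = 0

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, flipped)

for name, m in [("clean labels", clean_model), ("25% mislabeled", noisy_model)]:
    print(name, recall_score(y_test, m.predict(X_test)))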

So, in summary, collecting accurate fraud data is not the responsibility of just one set of people in any organization. The entire mind-set of the organization should be geared around collecting, preserving, and using this valuable resource effectively. Interestingly, our LinkedIn connection concludes, the fraud data challenges faced by many other industries are very similar to those faced by financial institutions such as his own. Banks are probably further along in fraud management and can provide a number of pointers to other industries, but fundamentally the problem is the same everywhere. Hence, many of the techniques he details here are applicable across industries, even though most of his experience is bank based. As fraud examiners and forensic accountants, we will no doubt witness the impact of analytically based fraud risk management across an ever-multiplying range of client industries.

Informed Analytics

by Michael Bret Hood,
21st Century Learning & Consulting, LLC,
University of Virginia, Retired FBI

I recently had a conversation with an old friend who is an accounting professor at a large southern university.

We were discussing my impending retirement and the surprising difficulty I am having in finding a corporate fraud investigation position. One of the things we discussed was the recent trend to hire mathematicians and statisticians as directors of fraud detection and risk control programs. Knowing that I could be biased, I asked the professor if he had seen the same thing. He replied that he had and then uttered, “What a foolish mistake!”

While neither of us harbors any ill will toward the community of mathematicians and statisticians (they are probably a lot smarter and far more technologically gifted than we are), fraud detection and fraud prevention are so much more than the numbers and their related informational sub-sets. Sun Tzu in The Art of War said, “Know your enemy and know yourself and you can fight a hundred battles without disaster.” What every fraud contains, and what data analytics can never fully account for, is the human behavior element. Unless the analytical process directly involves someone with expertise in how fraudsters operate, as well as someone who understands victimology, significant weaknesses will almost always be introduced into the analysis. Matt Asay, in his InformationWeek article ‘8 Reasons Big Data Projects Fail’, understands this inherent flaw: “Too many organizations hire data scientists who might be math and programming geniuses but who lack the most important component: domain knowledge.”

The current perception is that data analytics causes the fraudulent patterns in an organization to simply surface on their own. Unfortunately, the pattern algorithms and programs created by data scientists are not magic elixirs. Just as the old gold miners did, someone has to sort through the data to ensure the patterns are both relevant and valid. In his article ‘What Data Can’t Do’, author David Brooks says the following: “Data is always constructed to someone’s predispositions and values. The end result looks disinterested, but in reality, there are value choices all the way through, from construction to interpretation.” The old adage that inferior input equals inferior output applies with equal force to data analytics today.

Data analytics has certainly had its successes. Credit card companies have been able to stem losses through intricate, real-time analysis of current trends that unaided human reviewers would certainly be unable to produce manually. In other cases, data analytics has failed. “Google Flu Trends unsuccessfully predicted flu outbreaks in 2013 when it ended up forecasting twice as many cases of flu as the Centers for Disease Control and Prevention reported.” Data analytics as applied by even the best data scientists can't always quantify the human element in their computations. No one should say that data analytics is not a useful tool to be leveraged in a fraud investigation; in fact, it is most beneficial. However, implementing data analytics does not place an impenetrable anti-fraud fortress around your data and/or your money. Sometimes it takes a combination of data analytics and experienced professionals to produce the best results.

In one business, data analytics were deployed using Benford's Law in such a way that an insider-led tax refund fraud scheme was uncovered, saving the company millions of dollars. This data set, however, would never have been chosen for analysis were it not for forensic accountants who noticed a variance in the numbers they sampled. Fraud is a crime that always includes an unmistakable human element, represented in the actions and reactions of both the perpetrator and the victim. Data analytics, although extremely useful, will never be able to take into account the full dynamic range of emotions and decision making of which human beings are capable. Businesses have started to realize this problem, as evidenced by a recent Gartner survey in which the author claims, “Big data was so yesterday; it's all about algorithms today.”
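The article doesn't detail the analysis behind that refund case, but a basic Benford's Law test of the kind referred to is straightforward: compare the observed distribution of leading digits against the expected logarithmic distribution and examine the deviations. A minimal sketch with made-up refund amounts:

# Minimal first-digit Benford's Law test. The refund amounts are invented;
# in practice the test runs over the full population of disbursements.
import math
from collections import Counter

def benford_deviation(amounts):
    first_digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    counts = Counter(first_digits)
    n = len(first_digits)
    deviations = {}
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)      # Benford expected proportion
        observed = counts.get(d, 0) / n
        deviations[d] = observed - expected
    return deviations

refunds = [4821.50, 912.00, 118.75, 1950.00, 187.20, 1433.10, 160.00, 1275.00]
for digit, dev in benford_deviation(refunds).items():
    print(digit, round(dev, 3))

Large, persistent positive deviations for particular digits are a signal to drill into the underlying transactions, which is where the forensic accountants' judgment comes back in.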

While it may cost organizations a little more in salary to engage the services of experienced fraud investigators such as myself, the resulting ROI far exceeds the cost of the investment.

You Can’t Prevent What You Can’t See

The long, rainy Central Virginia Fourth of July weekend gave me a chance to review the ACFE's latest Report to the Nations, and I was struck by what the report had to say about proactive data analytics as an element of internal control, especially as applicable to small business fraud prevention.

We're all familiar with the data analytics performed by larger businesses, of which proactive data analytic tests form only a part. This type of analysis is accomplished with sophisticated software applications that comb through massive volumes of data to determine weak spots in the control system. By analyzing data in this manner, large companies can prevent fraud from happening or detect an ongoing fraud scheme. The Report to the Nations reveals, among other things, that of the anti-fraud controls analyzed, proactive data monitoring and analysis appears to be the most effective at limiting the duration and cost of fraud schemes. By performing proactive data analysis, companies detected fraud schemes sooner, limiting the total potential loss. Data analysis is not a new concept but, as we all know, with the increasing number of electronic transactions due to advances in technology, analyzing large volumes of data has become ever more complex and costly to implement and manage.

Companies of all sizes are accountable not only to shareholders but to lenders and government regulators. Although small businesses are not as highly regulated by the government, since they are typically not publicly financed, small business leaders share the same fiduciary duty as large businesses: to protect company assets. Since, according to the ACFE, the average company loses 5% of revenue to fraud, it stands to reason that preventing losses due to fraud could increase profitability by as much as 5%. When viewed in this light, many small businesses would benefit from taking a second look at implementing stronger fraud prevention controls. The ACFE also reports that small businesses tend to be victims of fraud more frequently than large businesses because small businesses have limited financial and human resources. In terms of fraud prevention and detection, having fewer resources overall translates into having fewer resources dedicated to strong internal controls. The Report also states that small businesses (fewer than 100 employees) experience significantly larger losses, percentage-wise, than larger businesses (more than 100 employees). Since small businesses do not have the resources to dedicate to fraud prevention and detection, they're not able to detect fraud schemes as quickly, prolonging the scheme and increasing the losses to the company.

The ACFE goes on to tell us that certain controls are anti-fraud by nature and can prevent and detect fraud, including conducting an external audit of a set of financial statements, maintaining an internal audit department, having an independent audit committee, management review of all financial statements, providing a hotline to company employees, implementing a company code of conduct and anti-fraud policy, and practicing pro-active data monitoring. While most of these controls are common for large companies, small businesses have difficulty implementing some of them, again,  because of their limited financial and human resources.

What jumped out at me from the ACFE's Report was that only 15% of businesses with under 100 employees currently perform proactive data analysis, while 41.9% of businesses with over 100 employees do. This is a sign that many small businesses could be doing a basic level of data analysis, but aren't. The largest costs associated with data analysis are software costs and the employee time needed to perform the analysis. With respect to employee resources, data analysis is a control that can be performed by a variety of employees, such as a financial analyst, an accountant, an external consultant, a controller, or even the CFO. The level of data analysis should always be structured to fit within the cost structure of the company. While larger companies may be able to assign a full-time analyst to handle these responsibilities, smaller companies may only be able to allocate a portion of an employee's time to this task. Given these realities, smaller businesses need to look for basic data analysis techniques that can be easily implemented.

The most basic data analysis techniques are taught in introductory accounting courses and aren't particularly complex: vertical analysis, horizontal analysis, liquidity ratios, and profitability ratios. Large public companies are required to prepare these types of calculations for their filings with the Securities and Exchange Commission. For small businesses, these ratios and analyses can be calculated from two of the basic financial statements produced by any accounting software: the income statement and the balance sheet. By comparing the results of these calculations to prior periods or to industry peers, significant variances can point to areas where fraudulent transactions may have occurred. This type of data analysis can be performed in a tabular format and the results used to create visual aids. Charts and graphs are a great way for a small business analyst to visualize variances and trends for management.
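Here is a short sketch of those calculations from summarized income statement and balance sheet figures; the account names and amounts are made up, and the same arithmetic could just as easily live in an Excel workbook.

# Sketch of vertical analysis, horizontal analysis, and basic liquidity and
# profitability ratios from summarized statements. Figures are illustrative.
income = {"revenue": 1_200_000, "cogs": 780_000, "operating_expenses": 290_000}
prior_income = {"revenue": 1_000_000, "cogs": 640_000, "operating_expenses": 250_000}
balance = {"current_assets": 450_000, "current_liabilities": 300_000,
           "inventory": 120_000, "total_assets": 1_500_000}

net_income = income["revenue"] - income["cogs"] - income["operating_expenses"]

# Vertical analysis: each income statement line as a percentage of revenue.
vertical = {k: v / income["revenue"] for k, v in income.items()}

# Horizontal analysis: period-over-period change for each line.
horizontal = {k: (income[k] - prior_income[k]) / prior_income[k] for k in income}

# Liquidity and profitability ratios.
current_ratio = balance["current_assets"] / balance["current_liabilities"]
quick_ratio = (balance["current_assets"] - balance["inventory"]) / balance["current_liabilities"]
net_margin = net_income / income["revenue"]

print(vertical, horizontal, current_ratio, quick_ratio, net_margin, sep="\n")

Comparing these results against prior periods or industry peers, and charting the variances, is where the fraud-relevant signal shows up.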

I like to point out to small business clients that all of the above calculations can be performed with Microsoft Excel and Microsoft Access. These are off-the-shelf tools that any analyst can use to perform even analytical calculations of great complexity. The availability of computing power in Excel and Access, and the relatively easy access to audit tools known as Computer Assisted Audit Techniques (CAATs), have accelerated the analytical review process generally. Combined with access to the accounting server and its related applications and to the general ledger, CAATs are very powerful tools indeed.

The next step would be to consider more advanced data analysis programs. Microsoft Excel has many features for performing data analysis, and it is probably already installed on many computers within small enterprises. CFEs might suggest that their clients add the ACL (Audit Command Language) Add-In to their Excel installations to gain yet another layer of advanced analysis that will help make data analytics more effective and efficient. When a small business reaches a level of profitability where it can incorporate a more advanced data analysis program, it can add a more robust tool such as IDEA or ACL Analytics. Improving controls by adding a specialized software program will require financial resources to acquire it and to train employees. It will also require the dedication of time from employees serving as internal examiners for fraud, such as internal auditors and financial personnel. Professional organizations such as the ACFE and AICPA have dedicated their time and efforts to ensuring that companies of all sizes are aware of the threats of fraud in the workplace. One suggestion I might make to these professional organizations would be to work with accounting software developers and the current developers of proactive data analysis tools to incorporate data analysis reports into their standard products. If a small business had the ability to run an anti-fraud report as part of its monthly management review of financial statements without having to program the report, it would save a significant amount of company resources and improve the fraud prevention program overall.

To sum up, according to Joseph T. Wells, founder of the ACFE, “data analytics have never been more important or useful to a fraud examiner. There are more places for fraud to hide, and more opportunities for fraudsters to conceal it.” Clearly there are many resources available today for small businesses of almost any size to implement proactive data analysis tools. With the significant advances in technology, exciting new anti-fraud solutions appear on the horizon almost daily; the only thing standing between them and our clients is the decision to pick them up and use them.