
MAC Documents

As our upcoming Ethics 2019 lecture for January-February 2019 makes clear, many of the most spectacular cases of fraud during the last two decades that were, at least initially, successfully concealed from auditors involved the long-running falsification of documents. Bernie Madoff and Enron come especially to mind. In hindsight, the auditors involved in these cases failed to detect the fraud for multiple reasons, one of which was a demonstrated lack of professional skepticism coupled with a general lack of awareness.

Fraud audit and red flag testing procedures are designed to validate the authenticity of documents and the performance of internal controls. Red flag testing procedures are based on observing indicators in the internal documents and in the internal controls. In contrast, fraud audit testing procedures verify the authenticity of the representations in the documents and internal controls. While internal controls are an element of each, neither is the same as the testing procedures performed in a traditional audit. Because fraud audit testing procedures are the basis of the fraud audit program, the analysis of documents will differ between the fraud audit and the traditional verification audit. Business systems are driven by documents, whether imaged paper documents or electronic ones. Approvals are handwritten, created mechanically, or created electronically through a computerized business application. Therefore, the ability to examine a document for the red flags indicative of a fraud scenario is a critical component of the process of fraud detection.

The ACFE points out that within fraud auditing, there are levels of document examination: the forensic document examination performed by a certified document examiner and the document examination performed by an independent external auditor conducting a fraud audit are distinct. Clearly, the auditor is not required to have the skills of a certified document examiner; however, the auditor should understand the difference between questioned document examination and the examination of documents for red flags.

Questioned, or forensic, document examination is the application of science to the law. The forensic document examiner, using specialized techniques, examines documents and any handwriting on the documents to establish their authenticity and to detect alterations. The American Academy of Forensic Sciences (AAFS) Questioned Document Section and the American Society of Questioned Document Examiners (ASQDE) provide guidance and standards to assurance professionals in the field of document examination. For example, the American Society for Testing and Materials, International (ASTM) Standard E444-09 (Standard Guide for Scope of Work of Forensic Document Examiners) indicates there are four components to the work of a forensic document examiner. These components are the following:

1. Establish document genuineness or non-genuineness, expose forgery, or reveal alterations, additions, or deletions.
2. Identify or eliminate persons as the source of handwriting.
3. Identify or eliminate the source of typewriting or other impression, marks, or relative evidence.
4. Write reports or give testimony, when needed, to aid the users of the examiner’s services in understanding the examiner’s findings.

CFEs will find that some forensic document examiners (FDEs) limit their work to the examination and comparison of handwriting; however, most inspect and examine the whole document in accordance with the ASTM standard.

The fraud examiner or auditor also focuses on the authenticity of the document, with two fundamental differences:

1. The degree of certainty. With forensic document examination, the forensic certainty is based on scientific principles. Fraud audit document examination is based on visual observations and informed audit experience.
2. Central focus. Fraud audit document examination focuses on the red flags associated with a hypothetical fraud scenario. Forensic document examination focuses on the genuineness of the document or handwriting under examination.

Awareness of the basic principles and objectives of forensic document examination is of assistance to any auditor or examiner in determining if, when and how to use the services of a certified document examiner in the process of conducting a fraud audit.

ACFE training indicates that documentary red flags are among the most important of all red flags. Examiners and auditors need to be aware not only of how a fraud scenario occurs, but also of how to employ the correct methodology in identifying and describing the documents related to a given scenario. These capabilities are also critical to successfully identifying document-related red flags. Specifically, a document must link to the fraud scenario and to the key controls of the involved business process(es).

The target document should be examined for the following: document condition, document format, document information, and industry standards. To these characteristics the concepts of missing, altered, and created content should be applied. The second aspect of the document examination is linking the document to the internal controls, a step critical to developing the decision-tree component of the fraud audit program. Using a document examination methodology aids the fraud auditor in building his or her fraud audit program.

The ACFE’s acronym MAC is a useful aid for the auditor in identifying red flags and the corresponding audit response. The ‘M’ stands for missing: either the entire document is missing or information is missing from a document; the ‘A’ for altered information on a document; and the ‘C’ for created documents or created information on a document. Specifically:

A missing document is a red flag. Missing documents occur because the document was never created, was destroyed, or has been misfiled. Documents are either the basis of initiating the transaction or support the transaction.

The frequency of missing documents must be linked to the fraud scenario. In some instances, missing one document may be a red flag, although typically repetition is necessary to warrant fraud audit testing procedures. The audit response should focus on the following attributes assuming the document links to a key control:

— Is the document externally or internally created? The existence of externally created documents can be confirmed with the source, assuming the source is not identified as involved in the fraud scenario.
— Is the document necessary to initiate the transaction or is the document a supporting one? Documents used to initiate a transaction had to have existed at some point; therefore, logic dictates that the document was destroyed or misfiled.
— One, two, or all three of the following questions could apply to internal documents:

• Is there a pattern of missing documents associated with the same entity?
• Is there a pattern of missing documents associated with an internal employee?
• Does the document support a key anti-fraud control, therefore being a trigger red flag, or is the missing document related to a non-key control?
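
The three pattern questions above lend themselves to a simple automated test. The sketch below is illustrative only: the transaction records, the field names (`vendor`, `employee`, `has_document`), and the threshold of three are my assumptions, not anything prescribed by the ACFE guidance.

```python
# Hypothetical sketch: surface patterns of missing supporting documents
# grouped by entity (vendor) and by the employee who processed the item.
from collections import Counter

def missing_doc_patterns(transactions, threshold=3):
    """Count transactions lacking a supporting document, grouped by
    vendor and by employee; report any group at or above the threshold."""
    by_vendor = Counter()
    by_employee = Counter()
    for t in transactions:
        if not t["has_document"]:
            by_vendor[t["vendor"]] += 1
            by_employee[t["employee"]] += 1
    return {
        "vendors": {v: n for v, n in by_vendor.items() if n >= threshold},
        "employees": {e: n for e, n in by_employee.items() if n >= threshold},
    }

# Invented sample data for illustration.
sample = [
    {"vendor": "Acme", "employee": "jdoe", "has_document": False},
    {"vendor": "Acme", "employee": "jdoe", "has_document": False},
    {"vendor": "Acme", "employee": "jdoe", "has_document": False},
    {"vendor": "Best", "employee": "asmith", "has_document": True},
]
flags = missing_doc_patterns(sample)
```

A single missing document rarely warrants a response on its own; the threshold models the repetition that, as noted above, is typically necessary before fraud audit testing procedures are justified.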

With regard to missing information on a document, several questions arise, one of which is: are there tears, torn pieces, soiled areas, or charred areas that cause information to be missing? To address any of these situations, finding a similar document type is needed to determine if the intent of the document has changed because of the missing information. Another question is: is information obliterated (e.g., covered, blotted, or wiped out)? Overwriting is commonly used to obscure existing writing. Correction fluid is also a common method, but the underlying writing can be read and photographed using transmitted light from underneath the document.

Scratching out writing with a pen will obliterate it successfully only if the scratching tears through the page; otherwise, the original writing can often still be deciphered. Spilled liquids can also obliterate writing.

‘A’, altered, pertains to changing or adding information on the original document. The information may be altered manually or through the use of desktop publishing capabilities. Manual changes tend to be visible through a difference in handwriting, while electronic documents would generally be altered via the software used to create them.

Any altering of information would be detected through the same red flags as adding information. In the context of fraud, forgery is the first thing that comes to mind in any discussion of altered documents. Forgery is a legal term applied to fraudulent imitation: an alteration of writing so as to convey the false impression that the document itself, not merely its contents, is authentic, thereby creating legal liability. It is an alteration of a document with the intent to defraud. It should be noted that it is possible for a document examiner to identify a document or signature as a forgery, but it is much less common for the examiner to identify the forger. This is due to the nature of handwriting: a forger attempting to imitate the writing habits of another person suppresses his or her own writing characteristics and style, in essence disguising his or her own writing.

A ‘C’, or created, document is any document prepared by the perpetrator of the fraud scenario, whether an entire fabricated document or text added to an existing one. The document can be prepared by an external source (e.g., a vendor in an over-billing scheme) or an internal source (e.g., a purchasing agent who creates false bids).

Some signs of document creation can include the age of the document being inconsistent with the purported creation date, or the document lacking the sophistication typically associated with normal business standards. Added or created text can be inserted with the use of ink or whatever type of writing instrument was used on the original. It can also be added by cutting and pasting sections of text, then photocopying the document to eliminate any outline. When pages are suspected of being added in this manner, the type of paper used for the original should be compared with that of the photocopy. In computer-generated and machine-produced documents, differences in the software used may result in textual differences.

As the MAC acronym seeks to demonstrate, fraudulent document information can be categorized as missing information, incorrect information, or information inconsistent with normal business standards. Therefore, the investigating CFE or auditor needs to have the requisite business and industry knowledge to correctly associate the appropriate red flags with the relevant documentary information consistent with the fraud scenario under investigation.

Forensic Data Analysis

As a long-term advocate of big data based solutions to investigative challenges, I have been interested to see the recent application of such approaches to the ever-growing problem of data breaches. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more; and evidence of fraud can be located anywhere within those mountains of data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might not uncover fraudulent activity. Fortunately, today’s fraud examiners have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably. If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.
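
As one example of the kind of systematic red-flag test such techniques enable, a Benford's-law first-digit comparison is widely used in digital analysis (it is my illustration here, not a test named in the text). The sketch and sample amounts below are invented.

```python
import math
from collections import Counter

def first_digit_distribution(amounts):
    """Observed leading-digit frequencies for a list of nonzero amounts."""
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a]
    total = len(digits)
    counts = Counter(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

def benford_expected(d):
    """Benford's-law expected proportion for leading digit d: log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

# Illustrative run over a small invented sample of invoice amounts; a real
# test would compare observed vs. expected across a large population.
dist = first_digit_distribution([123, 145, 199, 250, 310, 470, 520, 680, 910])
```

In legitimate accounting populations the digit 1 should lead roughly 30 percent of amounts; a large, unexplained deviation is a red flag worth drilling into, not proof of fraud.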

Big data are high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity. In addition to thinking of big data as a single set of data, fraud investigators and forensic accountants are thinking about the way data grow when data sets that might not normally be connected are joined together. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which make it difficult for investigators and client managements to manage and analyze.

Big data can be instrumental to the evidence gathering phase of an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? They look at documents and financial or operational data, and they interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison.

Big data helps to make it all work together to bring the complete picture into focus. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing erroneous conclusions in the event of a sampling error.
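
One concrete payoff of analyzing the full population rather than a sample is a duplicate-payment scan: a sampled review can easily miss both halves of a duplicated pair. The sketch below is minimal and the record layout (`vendor`, `invoice`, `amount`, `id`) is assumed.

```python
# Hypothetical full-population test: group every payment on the key
# (vendor, invoice number, amount) and report any key appearing twice.
from collections import defaultdict

def duplicate_payments(payments):
    """Return {(vendor, invoice, amount): [payment ids]} for keys
    shared by more than one payment in the entire population."""
    groups = defaultdict(list)
    for p in payments:
        groups[(p["vendor"], p["invoice"], p["amount"])].append(p["id"])
    return {k: ids for k, ids in groups.items() if len(ids) > 1}

# Invented sample data for illustration.
payments = [
    {"id": 1, "vendor": "Acme", "invoice": "A-101", "amount": 4990.0},
    {"id": 2, "vendor": "Acme", "invoice": "A-101", "amount": 4990.0},
    {"id": 3, "vendor": "Best", "invoice": "B-7", "amount": 120.0},
]
dupes = duplicate_payments(payments)
```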

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their breach-related schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports. Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds. When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the record.
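
The purchasing example above can be sketched as two passes, one over unstructured email text and one over structured inventory records. Everything here is invented for illustration: the field names, the watch-list phrases, and the idea that a missing count sheet marks a suspect adjustment.

```python
# Pass 1 (unstructured): emails whose body mentions a watch-list phrase.
def flag_emails(emails, keywords=("write off", "adjust", "don't tell")):
    """Return emails containing any watch-list phrase (case-insensitive)."""
    return [e for e in emails if any(k in e["body"].lower() for k in keywords)]

# Pass 2 (structured): inventory adjustments lacking a supporting count sheet.
def inventory_variances(records):
    """Return nonzero adjustments with no matching count sheet on file."""
    return [r for r in records if r["adjustment"] != 0 and not r["count_sheet"]]

# Invented sample data for illustration.
emails = [
    {"id": 1, "body": "Please adjust the count before the audit"},
    {"id": 2, "body": "Lunch on Friday?"},
]
records = [
    {"sku": "X1", "adjustment": -40, "count_sheet": None},
    {"sku": "X2", "adjustment": 0, "count_sheet": None},
]
suspect_emails = flag_emails(emails)
suspect_records = inventory_variances(records)
```

In practice the two result sets would then be joined on employee, date, or item to move from a vague suspicion to specific transactions.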

Recent reports of breach responses detailed in social media and the trade press indicate that investigators deploying advanced forensic data analysis tools across larger data sets gained better insights into the penetration, which led to more focused investigations and better root cause analysis, and contributed to more effective fraud risk management. Advanced technologies that incorporate data visualization, statistical analysis, and text-mining concepts, as compared to spreadsheets or relational database tools, can now be applied to massive data sets from disparate sources, enhancing breach response at all organizational levels.

These technologies enable our client companies to ask new compliance questions of their data that they might not have been able to ask previously. Fraud examiners can establish important trends in business conduct or identify suspect transactions among millions of records rather than being forced to rely on smaller samplings that could miss important transactions.

Data breaches bring enhanced regulatory attention. It’s clear that data breaches have raised the bar on regulators’ expectations of the components of an effective compliance and anti-fraud program. Adopting big data/forensic data analysis procedures into the monitoring and testing of compliance can create a cycle of improved adherence to company policies and improved fraud prevention and detection, while providing additional comfort to key stakeholders.

CFEs and forensic accountants are increasingly being called upon to be members of teams implementing or expanding big data/forensic data analysis programs so as to more effectively manage data breaches and a host of other instances of internal and external fraud, waste and abuse. To build a successful big data/forensic data analysis program, your client companies would be well advised to:

— begin by focusing on the low-hanging fruit: the priority of the initial project(s) matters. The first few projects, the low-hanging investigative fruit, normally incur the largest cost associated with setting up the analytics infrastructure, so it’s important that they yield tangible results/recoveries.

— go beyond the usual rule-based, descriptive analytics. One of the key goals of forensic data analysis is to increase the detection rate of internal control noncompliance while reducing the risk of false positives. From a technology perspective, clients’ internal audit and other investigative groups need to move beyond rule-based spreadsheets and database applications and embrace both structured and unstructured data sources, including the use of data visualization, text-mining and statistical analysis tools.

— see that successes are communicated. Share information on early successes across divisional and departmental lines to gain broad business process support. Once validated, success stories will generate internal demand for the outputs of the forensic data analysis program. Try to construct a multi-disciplinary team, including information technology, business users (i.e., end-users of the analytics) and functional specialists (i.e., those involved in the design of the analytics and day-to-day operations of the forensic data analysis program). Communicate across multiple departments to keep key stakeholders assigned to the fraud prevention program updated on forensic data analysis progress under a defined governance program. Don’t just seek to report instances of noncompliance; seek to use the data to improve fraud prevention and response. Obtain investment incrementally based on success, and not by attempting to involve the entire client enterprise all at once.

— leadership support will get the big data/forensic data analysis program funded, but regular interpretation of the results by experienced or trained professionals is what will make the program successful. Keep the analytics simple and intuitive; don’t try to cram too much information into any one report. Invest in new, updated versions of tools to make analytics sustainable. Develop and acquire staff professionals with the required skill sets to sustain and leverage the forensic data analysis effort over the long term.

Finally, enterprise-wide deployment of forensic data analysis takes time; clients shouldn’t be led to expect overnight adoption; an analytics integration is a journey, not a destination. Quick-hit projects might take four to six weeks, but the program and integration can take one to two years or more.

Our client companies need to look at a broader set of risks, incorporate more data sources, move away from lightweight, end-user, desktop tools and head toward real-time or near-real time analysis of increased data volumes. Organizations that embrace these potential areas for improvement can deliver more effective and efficient compliance programs that are highly focused on identifying and containing damage associated with hacker and other exploitation of key high fraud-risk business processes.

Needles & Haystacks

A long-time acquaintance of mine told me recently that, fresh out of the University of Virginia and new to forensic accounting, his first assignment consisted of searching, at the height of summer, through two unairconditioned trailers full of thousands of savings and loan records for what turned out to be just two documents critical to proving a loan fraud. He told me that he thought then that his job would always consist of finding needles in haystacks. Our profession and our tools have, thankfully, come a long way since then!

Today, digital analysis techniques afford the forensic investigator the ability to perform cost-effective financial forensic investigations. This is achieved through the following:

– The ability to test or analyze 100 percent of a data set, rather than merely sampling the data set.
– Massive amounts of data can be imported into working files, which allows for the processing of complex transactions and the profiling of certain case-specific characteristics.
– Anomalies within databases can be quickly identified, thereby reducing the number of transactions that require review and analysis.
– Digital analysis can be easily customized to address the scope of the engagement.
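
As a minimal illustration of that kind of customizable, population-wide filtering, one classic test flags payments that cluster just under an approval threshold (a common structuring red flag). The threshold value, the tolerance band, and the record layout below are my assumptions for the sketch.

```python
# Hypothetical filter: transactions falling just under an approval
# threshold, within a configurable band (default: the top 5 percent
# of the range below the threshold).
def below_threshold_outliers(transactions, threshold=5000, band=0.05):
    """Return transactions with threshold*(1-band) <= amount < threshold."""
    low = threshold * (1 - band)
    return [t for t in transactions if low <= t["amount"] < threshold]

# Invented sample data for illustration.
txns = [
    {"id": 1, "amount": 4950.0},
    {"id": 2, "amount": 5000.0},
    {"id": 3, "amount": 1200.0},
]
near_limit = below_threshold_outliers(txns)
```

Running such a filter over the full population shrinks the review set to a handful of transactions that merit the pick-and-shovel follow-up described below.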

Overall, digital analysis can streamline investigations that involve a large number of transactions, often turning a needle-in-the-haystack search into a refined and efficient investigation. Digital analysis is not designed to replace the pick-and-shovel aspect of an investigation. However, the proper application of digital analysis will permit the forensic operator to efficiently identify those specific transactions that require further investigation or follow up.

As every CFE knows, there are an ever-growing number of software applications that can assist the forensic investigator with digital analysis. A few such examples are CaseWare International Inc.’s IDEA, ACL Services Ltd.’s ACL Desktop Edition, and the ActiveData plug-in, which can be added to Excel.

So, whether using the Internet in an investigation or using software to analyze data, fraud examiners can today rely heavily on technology to aid them in almost any investigation. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more; and evidence of fraud can be located within that data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might or might not uncover evidence of fraudulent activity. Fortunately, fraud examiners now have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably.

If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.

Big data is now a buzzword in the worlds of business, audit, and fraud investigation. Big data are high volume, high velocity, and/or high variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity.

In addition to thinking of big data as a single set of data, fraud investigators should think about the way data grow when different data sets are connected together that might not normally be connected. Big data represents the continuous expansion of data sets, the size, variety, and speed of generation of which makes it difficult to manage and analyze.

Big data can be instrumental to fact gathering during an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? We look at documents and financial or operational data, and we interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison.

Big data helps to make it all work together to paint the complete picture. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is, as indicated above, that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing erroneous conclusions in the event of a sampling error.

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.
Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports.

Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds.

When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the records.

Data mining has roots in statistics, machine learning, data management and databases, pattern recognition, and artificial intelligence. All of these are concerned with certain aspects of data analysis, so they have much in common; yet they each have a distinct and individual flavor, emphasizing particular problems and types of solutions.

Although data mining technologies provide key advantages to marketing and business activities, they can also surface financial data previously hidden within a company’s database, enabling fraud examiners to detect potential fraud.

Data mining software provides an easy-to-use process that gives the fraud examiner the ability to get to data at a required level of detail. Data mining combines several different techniques essential to detecting fraud, including the streamlining of raw data into understandable patterns.

Data mining can also help prevent fraud before it happens. For example, computer manufacturers report that some of their customers use data mining tools and applications to develop anti-fraud models that score transactions in real-time. The scoring is customized for each business, involving factors such as locale and frequency of the order, and payment history, among others. Once a transaction is assigned a high-risk score, the merchant can decide whether to accept the transaction, deny it, or investigate further.
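
A toy version of the real-time scoring described above might look like the following. The rules, weights, and cutoffs are purely illustrative, not any vendor's actual model; production systems are typically statistically fitted rather than hand-weighted.

```python
# Hypothetical rule-based risk score using the factors named in the text:
# locale (mismatched countries), order frequency, and payment history.
def score_transaction(txn, history):
    """Return an additive risk score for a single transaction."""
    score = 0
    if txn["ship_country"] != txn["bill_country"]:  # locale mismatch
        score += 30
    recent = [h for h in history if h["customer"] == txn["customer"]]
    if len(recent) > 5:                             # unusually frequent orders
        score += 25
    if any(h["chargeback"] for h in recent):        # prior payment problems
        score += 45
    return score

def disposition(score, deny_at=70, review_at=40):
    """Map a score to the merchant's accept/review/deny decision."""
    if score >= deny_at:
        return "deny"
    return "review" if score >= review_at else "accept"

# Invented example: mismatched countries plus a prior chargeback.
history = [{"customer": "c9", "chargeback": True}]
txn = {"customer": "c9", "ship_country": "US", "bill_country": "CA"}
result = disposition(score_transaction(txn, history))
```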

Often, companies use data warehouses to manage data for analysis. Data warehouses are repositories of a company’s electronic data designed to facilitate reporting and analysis. By storing data in a data warehouse, data users can query and analyze relevant data stored in a single location. Thus, a company with a data warehouse can perform various types of analytic operations (e.g., identifying red flags, transaction trends, patterns, or anomalies) to assist management with its decision making responsibilities.
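
A single-location analytic query of the kind described above can be sketched with Python's built-in sqlite3 standing in for a real warehouse; the table name, schema, and data are invented for illustration.

```python
import sqlite3

# Hypothetical in-memory "warehouse" table of payments.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (vendor TEXT, amount REAL, paid_on TEXT)")
con.executemany("INSERT INTO payments VALUES (?, ?, ?)", [
    ("Acme", 4990.0, "2019-01-05"),
    ("Acme", 4990.0, "2019-01-06"),
    ("Best", 120.0,  "2019-01-07"),
])

# One analytic pass over the whole store: vendors paid the same
# amount more than once (a repeated-identical-amount red flag).
rows = con.execute("""
    SELECT vendor, amount, COUNT(*) AS n
    FROM payments
    GROUP BY vendor, amount
    HAVING COUNT(*) > 1
""").fetchall()
```

Because all the relevant data sits in one queryable location, the same store supports trend, pattern, and anomaly queries without re-extracting data from each source system.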

In conclusion, after the fraud examiner has identified the data sources, s/he should identify how the information is stored by reviewing the database schema and technical documentation. Fraud examiners must be ready to face a number of pitfalls when attempting to identify how information is stored, from weak or nonexistent documentation to limited collaboration from the IT department.

Moreover, once collected, it’s critical to ensure that the data is complete and appropriate for the analysis to be performed. Depending on how the data was collected and processed, it could require some manual work to make it usable for analysis purposes; it might be necessary to modify certain field formats (e.g., date, time, or currency) to make the information usable.
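
The field-format cleanup mentioned above might be sketched as follows; the accepted date layouts and the currency symbols stripped are assumptions about the source systems.

```python
# Hypothetical normalization helpers for mixed-format extracts.
from datetime import datetime

def normalize_date(raw):
    """Try a few common date layouts and emit ISO-8601; None if unparseable,
    leaving the value for manual review rather than guessing."""
    for fmt in ("%m/%d/%Y", "%d-%b-%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw, fmt).date().isoformat()
        except ValueError:
            continue
    return None

def normalize_currency(raw):
    """Strip currency symbols and thousands separators: '$1,234.50' -> 1234.5."""
    cleaned = raw.replace("$", "").replace(",", "").strip()
    return float(cleaned)
```

Normalizing once, up front, means every subsequent test compares like with like instead of silently treating "01/31/2019" and "31-Jan-2019" as different values.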

Fraud Prevention Oriented Data Mining

One of the most useful components of our Chapter’s recently completed two-day seminar on Cyber Fraud & Data Breaches was our speaker Cary Moore’s observations on the fraud-fighting potential of management’s creative use of data mining. For CFEs and forensic accountants, the benefits of data mining go much deeper than its use as a tool to help our clients combat traditional fraud, waste and abuse. In its simplest form, data mining provides automated, continuous feedback to ensure that systems and anti-fraud related internal controls operate as intended and that transactions are processed in accordance with policies, laws and regulations. It can also provide our client managements with timely information that can permit a shift from traditional retrospective/detective activities to the proactive/preventive activities so important to today’s concept of what effective fraud prevention should be. Data mining can put the organization out front of potential fraud vulnerability problems, giving it an opportunity to act to avoid or mitigate the impact of negative events or financial irregularities.

Data mining tests can produce “red flags” that help identify the root cause of problems and allow actionable enhancements to systems, processes and internal controls that address systemic weaknesses. Applied appropriately, data mining tools enable organizations to realize important benefits, such as cost optimization, adoption of less costly business models, improved program, contract and payment management, and process hardening for fraud prevention.

In its most complex, modern form, data mining can be used to:

–Inform decision-making
–Provide predictive intelligence and trend analysis
–Support mission performance
–Improve governance capabilities, especially dynamic risk assessment
–Enhance oversight and transparency by targeting areas of highest value or fraud risk for increased scrutiny
–Reduce costs especially for areas that represent lower risk of irregularities
–Improve operating performance

Cary emphasized that leading, successful organizational implementers have tended to take a measured approach initially when embarking on a fraud prevention-oriented data mining initiative, starting small and focusing on particular “pain points” or areas of opportunity to tackle first, such as whether only eligible recipients are receiving program funds or targeting business processes that have previously experienced actual frauds. Through this approach, organizations can deliver quick wins to demonstrate an early return on investment and then build upon that success as they move to more sophisticated data mining applications.

So, according to ACFE guidance, what are the ingredients of a successful data mining program oriented toward fraud prevention? There are several steps, which should be helpful to any organization in setting up such an effort with fraud, waste, abuse identification/prevention in mind:

–Avoid problems by adopting commonly used data mining approaches and related tools.

This is essentially a cultural transformation for any organization that has either not understood the value these tools can bring or has viewed their implementation as someone else’s responsibility. Given the cyber fraud and breach related challenges faced by all types of organizations today, it should be easier for fraud examiners and forensic accountants to convince management of the need to use these tools to prevent problems and to improve the ability to focus on cost-effective means of better controlling fraud-related vulnerabilities.

–Understand the potential that data mining provides to the organization to support day to day management of fraud risk and strategic fraud prevention.

Understanding both the value of data mining and how to use the results is at the heart of effectively leveraging these tools. The CEO and corporate counsel can play an important educational and support role for a program that must ultimately be owned by line managers who have responsibility for their own programs and operations.

–Adopt a version of an enterprise risk management program (ERM) that includes a consideration of fraud risk.

An organization must thoroughly understand its risks and establish a risk appetite across the enterprise. In this way, it can focus on those areas of highest value to the organization. An organization should take stock of its risks and ask itself fundamental questions, such as:

–What do we lose sleep over?
–What do we not want to hear about us on the evening news or read about in the print media or on a blog?
–What do we want to make sure happens and happens well?

Data mining can be an integral part of an overall program for enterprise risk management. Both are premised on establishing a risk appetite and incorporating a governance and reporting framework. This framework in turn helps ensure that day-to-day decisions are made in line with the risk appetite, and are supported by data needed to monitor, manage and alleviate risk to an acceptable level. The monitoring capabilities of data mining are fundamental to managing risk and focusing on issues of importance to the organization. The application of ERM concepts can provide a framework within which to anchor a fraud prevention program supported by effective data mining.

–Determine how your client is going to use the data mined information in managing the enterprise and safeguarding enterprise assets from fraud, waste and abuse.

Once an organization is on top of the data, using it effectively becomes paramount and should be considered as the information requirements are being developed. As Cary pointed out, getting the right data has been cited as being the top challenge by 20 percent of ACFE surveyed respondents, whereas 40 percent said the top challenge was the “lack of understanding of how to use analytics”. Developing a shared understanding so that everyone is on the same page is critical to success.

–Keep building and enhancing the application of data mining tools.

As indicated above, a tried-and-true approach is to begin with the low-hanging fruit, something that will get your client started and will provide an opportunity to learn on a smaller scale. The experience gained will help enable the expansion and enhancement of data mining tools. While this may be done gradually, it should be a priority and not viewed as the “management reform initiative of the day.” There should be a clear game plan for building data mining capabilities into the fiber of management’s fraud and breach prevention effort.

–Use data mining as a tool for accountability and compliance with the fraud prevention program.

It is important to hold managers accountable not only for helping institute robust data mining programs, but also for the results of these programs. Has the client developed performance measures that clearly demonstrate the results of using these tools? Do they reward those managers who are in the forefront in implementing these tools? Do they make it clear to those who don’t that resistance or hesitation is not acceptable?

–View this as a continuous process and not a “one and done” exercise.

Risks change over time. Fraudsters are always adjusting their targets and moving to exploit new and emerging weaknesses. They follow the money. Technology will continue to evolve, introducing both new risks and new opportunities and tools for management. This client management effort to protect against dangers and rectify errors is one that never ends, but it is also one that can pay benefits in preventing or managing cyber-attacks and breaches that far outweigh the costs if effectively and efficiently implemented.

In conclusion, the stark realities of today’s cyber related challenges at all levels of business, private and public, and the need to address ever rising service delivery expectations have raised the stakes for managing the cost of doing business and conducting the on-going war against fraud, waste and abuse. Today’s client-managers should want to be on top of problems before they become significant, and the strategic use of data mining tools can help them manage and protect their enterprises while saving money, a win/win opportunity for the client and for the CFE.

Analytics Confronts the Normal

The Information Systems Audit and Control Association (ISACA) tells us that we produce and store more data in a day now than mankind did altogether in the last 2,000 years. The data that is produced daily is estimated to be one exabyte, which is the computer storage equivalent of one quintillion bytes, which is the same as one million terabytes. Not too long ago, about 15 years, a terabyte of data was considered a huge amount of data; today the latest Swiss Army knife comes with a 1 terabyte flash drive.

When an interaction with a business is complete, the information from the interaction is only as good as the pieces of data that get captured during that interaction. A customer walks into a bank and withdraws cash. The transaction that just happened gets stored as a monetary withdrawal transaction with certain characteristics in the form of associated data. There might be information on the date and time when the withdrawal happened; there may be information on which customer made the withdrawal (if there are multiple customers who operate the same account). The amount of cash that was withdrawn, the account from which the money was extracted, the teller/ATM who facilitated the withdrawal, the balance on the account after the withdrawal, and so forth, are all typically recorded. But these are just a few of the data elements that can get captured in any withdrawal transaction. Just imagine all the different interactions possible on all the assorted products that a bank has to offer: checking accounts, savings accounts, credit cards, debit cards, mortgage loans, home equity lines of credit, brokerage, and so on. The data that gets captured during all these interactions goes through data-checking processes and gets stored somewhere internally or in the cloud.  The data that gets stored this way has been steadily growing over the past few decades, and, most importantly for fraud examiners, most of this data carries tons of information about the nuances of the individual customers’ normal behavior.

In addition to what the customer does, from the same data, by looking at a different dimension of the data, examiners can also understand what is normal for certain other related entities. For example, by looking at all the customer withdrawals at a single ATM, CFEs can gain a good understanding of what is normal for that particular ATM terminal. Understanding the normal behavior of customers is very useful in detecting fraud since deviation from normal behavior is a primary indicator of fraud. Understanding non-fraud or normal behavior is not only important at the main account holder level but also at all the entity levels associated with that individual account. The same data presents completely different information when observed in the context of one entity versus another. In this sense, having all the data saved and then analyzed and understood is a key element in tackling the fraud threat to any organization.
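A toy sketch of this baseline idea: score a new withdrawal against the customer's own history using a simple standard-deviation threshold. The threshold of three deviations is an illustrative assumption, not a standard; production systems use far richer behavioral models.

```python
from statistics import mean, stdev

def is_abnormal(history, amount, threshold=3.0):
    """Flag a withdrawal deviating from the customer's own baseline by
    more than `threshold` standard deviations (illustrative cutoff)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold
```

The same scoring pattern can be applied along other dimensions of the data, for example against the baseline of a single ATM terminal rather than a single customer.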

Any systematic, numbers-based system of understanding of the phenomenon of fraud as a past occurring event is dependent on an accurate description of exactly what happened through the data stream that got accumulated before, during, and after the fraud scenario occurred. Allowing the data to speak is the key to the success of any model-based system. This data needs to be saved and interpreted very precisely for the examiner’s models to make sense. The first crucial step to building a model is to define, understand, and interpret fraud scenarios correctly. At first glance, this seems like a very easy problem to solve. In practical terms, it is a far more complicated process than it seems.

The level of understanding of the fraud episode or scenario itself varies greatly among the different business processes involved with handling the various products and functions within an organization. Typically, fraud can have a significant impact on the bottom line of any organization. Looking at the level of specific information that is systematically stored and analyzed about fraud in financial institutions for example, one would arrive at the conclusion that such storage needs to be a lot more systematic and rigorous than it typically is today. There are several factors influencing this. Unlike some of the other types of risk involved in client organizations, fraud risk is a censored problem. For example, if we are looking at serious delinquency, bankruptcy, or charge-off risk in credit card portfolios, the actual dollars-at-risk quantity is very well understood. Based on past data, it is relatively straightforward to quantify precise credit dollars at risk by looking at how many customers defaulted on a loan or didn’t pay their monthly bill for three or more cycles or declared bankruptcy. Based on this, it is easy to quantify the amount at risk as far as credit risk goes. However, in fraud, it is virtually impossible to quantify the actual amount that would have gone out the door as the fraud is stopped immediately after detection. The problem is censored as soon as some intervention takes place, making it difficult to precisely quantify the potential risk.

Another challenge in the process of quantifying fraud is how well the fraud episode itself gets recorded. Consider the case of a credit card number getting stolen without the physical card getting stolen. During a certain period, both the legitimate cardholder and the fraudster are charging using the card. If the fraud detection system in the issuing institution doesn’t identify the fraudulent transactions as they were happening in real time, typically fraud is identified when the cardholder gets the monthly statement and figures out that some of the charges were not made by him/her. Then the cardholder calls the issuer to report the fraud.  In the not too distant past, all that used to get recorded by the bank was the cardholder’s estimate of when the fraud episode began, even though there were additional details about the fraudulent transactions that were likely shared by the cardholder. If all that gets recorded is the cardholder’s estimate of when the fraud episode began, ambiguity is introduced regarding the granularity of the actual fraud episode. The initial estimate of the fraud amount becomes a rough estimate at best.
In the case in which the bank’s fraud detection system was able to catch the fraud during the actual fraud episode, the fraudulent transactions tended to be recorded by a fraud analyst, and sometimes not too accurately. If a transaction was marked as fraud or non-fraud incorrectly, the error was typically not corrected even after the correct information flowed in. When the transactions that were actually fraudulent were eventually identified using the actual postings, relating them back to the authorization transactions was often not a straightforward process. Sometimes the amounts of the transactions varied slightly. For example, the authorization transaction for a restaurant charge often does not include the tip that the customer added to the bill, so the posted amount when the transaction is reconciled looks slightly different from the authorized amount. All of this poses an interesting challenge when designing a data-driven analytical system to combat fraud.
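One way to sketch the authorization-to-posting reconciliation described above, assuming hypothetical card, merchant, and amount fields and an illustrative tolerance margin to absorb differences such as an added tip:

```python
def match_posting(auth, postings, tolerance=0.25):
    """Find the posted transaction matching an authorization, allowing
    the posted amount to exceed the authorized amount by up to
    `tolerance` (an illustrative margin, e.g., for a restaurant tip)."""
    for p in postings:
        if (p["card"] == auth["card"]
                and p["merchant"] == auth["merchant"]
                and auth["amount"] <= p["amount"] <= auth["amount"] * (1 + tolerance)):
            return p
    return None
```

Authorizations that find no posting within tolerance, or postings with no authorization, become exceptions for analyst review.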

The level of accuracy associated with recording fraud data also tends to be dependent on whether the fraud loss is a liability for the customer or to the financial institution. To a significant extent, the answer to the question, “Whose loss is it?” really drives how well past fraud data is recorded. In the case of unsecured lending such as credit cards, most of the liability lies with the banks, and the banks tend to care a lot more about this type of loss. Hence systems are put in place to capture this data on a historical basis reasonably accurately.

In the case of secured lending, ID theft, and so on, a significant portion of the liability is really on the customer, and it is up to the customer to prove to the bank that he or she has been defrauded. Interestingly, this shift of liability also tends to have an impact on the quality of the fraud data captured. In the case of fraud associated with automated clearing house (ACH) batches and domestic and international wires, the problem is twofold: The fraud instances are very infrequent, making it impossible for the banks to have a uniform method of recording frauds; and the liability shifts are dependent on the geography.  Most international locations put the onus on the customer, while in the United States there is legislation requiring banks to have fraud detection systems in place.

The extent to which our client organizations take responsibility also tends to depend on how much they care about the customer who has been defrauded. When a very valuable customer complains about fraud on her account, a bank is likely to pay attention; but given that most such frauds are not large scale, banks often see less need to establish elaborate systems to collect the data and keep track of past irregularities. The past fraud information is also influenced heavily by whether the fraud is third-party or first-party fraud. Third-party fraud is fraud committed by a third party, not one of the two parties involved in a transaction. In first-party fraud, the perpetrator is the one who has the relationship with the bank. The fraudster in this case goes to great lengths to prevent the bank from knowing that fraud is happening, so there is no reporting of the fraud by the customer. Until the bank figures out that fraud is going on, there is no data that can be collected; such fraud can go on for quite a while, and some of it might never be identified. This poses some interesting problems. Internal fraud, where an employee of the institution is committing the fraud, can also take significantly longer to find, so the data on it tends to be scarce as well.

In summary, one of the most significant challenges in fraud analytics is to build a sufficient database of normal client transactions. The normal transactions of any organization constitute the baseline from which abnormal, fraudulent, or irregular transactions can be identified and analyzed. Pinpointing the irregular is thus foundational to developing the transaction processing edits that prevent irregular transactions embodying fraud from even being processed and paid on the front end, furnishing the key to modern, analytically based fraud prevention.

Bye-Bye Money

Miranda had responsibility for preparing personnel files for new hires, approval of wages, verification of time cards, and distribution of payroll checks. She “hired” fictitious employees, faked their records, and ordered checks through the payroll system. She deposited some checks in several personal bank accounts and cashed others, endorsing all of them with the names of the fictitious employees and her own. Her company’s payroll function created a large paper trail of transactions among which were individual earnings records, W-2 tax forms, payroll deductions for taxes and insurance, and Form 941 payroll tax reports. She mailed all the W-2 forms to the same post office box.

Miranda stole $160,000 by creating some “ghosts,” usually 3 to 5 out of 112 people on the payroll and paying them an average of $650 per week for three years. Sometimes the ghosts quit and were later replaced by others. But she stole “only” about 2 percent of the payroll funds during the period.

A tip from a fellow employee received by the company hotline resulted in the engagement of Tom Hudson, CFE.  Tom’s objective was to obtain evidence of the existence and validity of payroll transactions on the control premise that different people should be responsible for hiring (preparing personnel files), approving wages, and distributing payroll checks. “Thinking like a crook” led Tom to readily see that Miranda could put people on the payroll and obtain their checks just as the hotline caller alleged. In his test of controls Tom audited for transaction authorization and validity. In this case random sampling was less likely to work because of the small number of alleged ghosts. So, Tom looked for the obvious. He selected several weeks’ check blocks, accounted for numerical sequence (to see whether any checks had been removed), and examined canceled checks for two endorsements.
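Tom's manual tests, accounting for numerical sequence and scanning canceled checks for double endorsements, can be sketched as simple data tests; the record layout below is an assumption for illustration:

```python
def sequence_gaps(check_numbers):
    """Report missing numbers in a block of payroll checks; a removed
    check can indicate concealment."""
    nums = sorted(check_numbers)
    full = set(range(nums[0], nums[-1] + 1))
    return sorted(full - set(nums))

def double_endorsements(checks):
    """Flag canceled checks carrying more than one distinct endorsement name."""
    return [c["number"] for c in checks if len(set(c["endorsements"])) > 1]
```

Either test alone only raises red flags; the canceled checks themselves remain the evidence to examine.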

Tom reasoned that there may be no “balance” to audit for existence/occurrence, other than the accumulated total of payroll transactions, and that the total might not appear out of line with history because the tipster had indicated that the fraud was small in relation to total payroll and had been going on for years.  He decided to conduct a surprise payroll distribution, then followed up by examining prior canceled checks for the missing employees and then scan personnel files for common addresses.

Both the surprise distribution and the scan for common addresses quickly provided the names of 2 or 3 exceptions. Both led to prior canceled checks (which Miranda had not removed and the bank reconciler had not noticed), which carried Miranda’s own name as endorser. Confronted, she confessed.

The major risks in any payroll business cycle are:

•Paying fictitious “employees” (invalid transactions, employees do not exist);

•Overpaying for time or production (inaccurate transactions, improper valuation);

•Incorrect accounting for costs and expenses (incorrect classification, improper or inconsistent presentation and disclosure).

The assessment of payroll system control risk normally takes on added importance because most companies have fairly elaborate and well-controlled personnel and payroll functions. The transactions in this cycle are numerous during the year yet result in smaller amounts in balance sheet accounts at year-end. Therefore, in most routine outside auditor engagements, the review of controls, tests of controls, and audit of transaction details constitute the major portion of the evidence gathered for these accounts. On most annual audits, the substantive audit procedures devoted to auditing the payroll-related account balances are very limited, which enhances fraud risk.

Control procedures for proper segregation of responsibilities should be in place and operating. Proper segregation involves authorization (personnel department hiring and firing, pay rate and deduction authorizations) by persons who do not have payroll preparation, paycheck distribution, or reconciliation duties. Payroll distribution (custody) is in the hands of persons who do not authorize employees’ pay rates or time, nor prepare the payroll checks. Recordkeeping is performed by payroll and cost accounting personnel who do not make authorizations or distribute pay. Combinations of two or more of the duties of authorization, payroll preparation and recordkeeping, and payroll distribution in one person, one office, or one computerized system may open the door for errors and frauds. In addition, the control system should provide for detail control checking activities.  For example: (1) periodic comparison of the payroll register to the personnel department files to check hiring authorizations and for terminated employees not deleted, (2) periodic rechecking of wage rate and deduction authorizations, (3) reconciliation of time and production paid to cost accounting calculations, (4) quarterly reconciliation of YTD earnings records with tax returns, and (5) payroll bank account reconciliation.
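The first of the detail control checks listed above, comparing the payroll register to personnel department files, might be sketched as follows, assuming hypothetical employee-ID and status fields:

```python
def payroll_exceptions(payroll_register, personnel_files):
    """Compare the payroll register against personnel records: anyone
    paid without an active personnel file (never hired, or terminated
    but not deleted) is an exception to investigate."""
    active = {p["id"] for p in personnel_files if p["status"] == "active"}
    return sorted({r["id"] for r in payroll_register} - active)
```

Run periodically, a comparison like this would have surfaced Miranda's ghosts long before three years of losses accumulated.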

Payroll can amount to 40 percent or more of an organization’s total annual expenditures. Payroll taxes, Social Security, Medicare, pensions, and health insurance can add several percentage points in variable costs on top of wages. So, for every payroll dollar saved through forensic identification, bonus savings arise automatically from the on-top costs calculated on base wages. Different industries will exhibit different payroll risk profiles. For example, firms whose culture involves salaried employees who work longer hours may have a lower risk of payroll fraud and may not warrant a full forensic approach. Organizations may present greater opportunity for payroll fraud if their workforce patterns entail night shift work, variable shifts or hours, 24/7 on-call coverage, and employees who are mobile, unsupervised, or work across multiple locations. Payroll-related risks include over-claimed allowances, overused extra pay for weekend or public holiday work, fictitious overtime, vacation and sick leave taken but not deducted from leave balances, continued payment of employees who have left the organization, ghost employees arising from poor segregation of duties, vulnerability of the data output to the bank for electronic payment, and roster dysfunction. Yet the personnel assigned to administer the complexities of payroll are often qualified more by experience than by formal finance, legal, or systems training, thereby creating a competency bias over how payroll is managed. On top of that, payroll is normally shrouded in secrecy because of the inherently private nature of employee and executive pay. Underpayment errors are less persistent than overpayment errors: underpayments are likely to be corrected when the affected employees complain, whereas overpayments are less likely to be discovered. These systemic biases further increase the risk of unnoticed payroll error and fraud.

Payroll data analysis can reveal individuals or entire teams who are unusually well-remunerated because team supervisors turn a blind eye to payroll malpractice, as well as low-remunerated personnel who represent excellent value to the organization. For example, it can identify the night shift worker who is paid extra for weekend or holiday work plus overtime while actually working only half the contracted hours, or workers who claim higher duty or tool allowances to which they are not entitled. In addition to providing management with new insights into payroll behaviors, which may in turn become part of ongoing management reporting, the total payroll cost distribution analysis can point forensic accountants toward urgent payroll control improvements.

The detail inside payroll and personnel databases can reveal hidden information to the forensic examiner. Who are the highest earners of overtime pay and why? Which employees gained the most from weekend and public holiday pay? Who consistently starts late? Finishes early? Who has the most sick leave? Although most employees may perform a fair day’s work, the forensic analysis may point to those who work less, sometimes considerably less, than the time for which they are paid. Joined-up query combinations to search payroll and human resources data can generate powerful insights into the organization’s worst and best outliers, which may be overlooked by the data custodians. An example of a query combination would be: employees with high sick leave + high overtime + low performance appraisal scores + negative disciplinary records. Or, reviewers could invert those factors to find the unrecognized exemplary performers.
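The joined-up query combination described above might be sketched as follows; the field names and thresholds are illustrative assumptions, not audit standards:

```python
def worst_outliers(employees,
                   sick_hours_min=80, overtime_hours_min=200,
                   appraisal_max=2, discipline_min=1):
    """Joined-up query: employees combining high sick leave, high
    overtime, low appraisal scores, and disciplinary records.
    Thresholds are illustrative and would be tuned per engagement."""
    return [e["id"] for e in employees
            if e["sick_hours"] >= sick_hours_min
            and e["overtime_hours"] >= overtime_hours_min
            and e["appraisal"] <= appraisal_max
            and e["discipline_count"] >= discipline_min]
```

Inverting the same thresholds (low sick leave, high appraisal, clean record) would surface the unrecognized exemplary performers the passage mentions.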

Where predication suggests fraud concerns about identified employees, CFEs can add value by triangulating time sheet claims against external data sources such as site access biometric data, company cell phone logs, phone number caller identification, GPS data, company email, Internet usage, company motor fleet vehicle tolls, and vehicle refueling data, most of which contain useful date and time-of-day parameters.  The data buried within these databases can reveal employee behavior, including what they were doing, where they were, and who they were interacting with throughout the work day.

Common findings include:

–Employees who leave work wrongfully during their shift;
–Employees who work fewer hours and take sick time during the week to shift the workload to weekends and public holidays to maximize pay;
–Employees who use company property excessively for personal purposes during working hours;
–Employees who visit vacation destinations while on sick leave;
–Employees who take leave but whose managers do not log the paperwork, thereby not deducting leave taken and overstating leave balances;
–Employees who moonlight in businesses on the side during normal working hours, sometimes using the organization’s equipment to do so.

Well-researched and documented forensic accounting fieldwork can support management action against those who may have defrauded the organization or work teams that may be taking inappropriate advantage of the payroll system. Simultaneously, CFEs and forensic accountants, working proactively, can partner with management to recover historic costs, quantify future savings, reduce reputational and political risk, improve the organization’s anti-fraud policies, and boost the productivity and morale of employees who knew of wrongdoing but felt powerless to stop it.

The Who, the What, the When

CFEs and forensic accountants are seekers. We spend our days searching for the most relevant information about our client requested investigations from an ever-growing and increasingly tangled data sphere and trying to make sense of it. Somewhere hidden in our client’s computers, networks, databases, and spreadsheets are signs of the alleged fraud, accompanying control weaknesses and unforeseen risks, as well as possible opportunities for improvement. And the more data the client organization has, the harder all this is to find.  Although most computer-assisted forensic audit tests focus on the numeric data contained within structured sources, such as financial and transactional databases, unstructured or text-based data, such as e-mail, documents, and Web-based content, represents an estimated 80 percent of enterprise data within the typical medium to large-sized organization. When assessing written communications or correspondence about fraud related events, CFEs often find themselves limited to reading large volumes of data, with few automated tools to help synthesize, summarize, and cluster key information points to aid the investigation.

Text analytics is a relatively new investigative tool for CFEs in actual practice, although some report having used it extensively for at least the last five years. According to the ACFE, the software itself stems from a combination of developments in our sister fields of litigation support and electronic discovery, from counterterrorism and surveillance technology, from customer relationship management, and from research into the life sciences, specifically artificial intelligence. The application of text analytics in data review and criminal investigations dates to the mid-1990s.

Generally, CFEs increasingly use text analytics to examine three main elements of investigative data: the who, the what, and the when.

The Who: According to many recent studies, substantially more than half of businesspeople prefer using e-mail to the telephone. Most fraud-related business transactions or events, then, will likely have at least some e-mail communication associated with them. Unlike telephone messages, e-mail contains rich metadata, information stored about the data, such as its author, origin, version, and date accessed, and can be documented easily. For example, to monitor who is communicating with whom in a targeted sales department, and conceivably to identify whether any alleged relationships therein might signal anomalous activity, a forensic accountant might wish to analyze metadata in the “to,” “from,” “cc,” or “bcc” fields in departmental e-mails. Many technologies for parsing e-mail with text analytics capabilities are available on the market today, some stemming from civil investigations and related electronic discovery software. These technologies are akin to the social network diagrams used in law enforcement or in counterterrorism efforts.
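Using only Python's standard library, a minimal sketch of this kind of metadata analysis might tally sender-recipient pairs across a set of raw e-mail messages (commercial tools do far more, such as rendering the results as social network diagrams):

```python
from collections import Counter
from email import message_from_string
from email.utils import getaddresses

def correspondence_pairs(raw_messages):
    """Tally (sender, recipient) pairs across raw e-mails using the
    From/To/Cc/Bcc metadata fields."""
    pairs = Counter()
    for raw in raw_messages:
        msg = message_from_string(raw)
        senders = getaddresses([msg.get("From", "")])
        sender = senders[0][1] if senders else ""
        recipients = getaddresses((msg.get_all("To") or [])
                                  + (msg.get_all("Cc") or [])
                                  + (msg.get_all("Bcc") or []))
        for _, addr in recipients:
            pairs[(sender, addr)] += 1
    return pairs
```

High counts between unexpected pairs, say a buyer and a vendor's sales agent, are the kind of anomaly this tally surfaces for review.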

The What: The ever-present ambiguity inherent in human language presents significant challenges to the forensic investigator trying to understand the circumstances and actions surrounding the text based aspects of a fraud allegation. This difficulty is compounded by the tendency of people within organizations to invent their own words or to communicate in code. Language ambiguity can be illustrated by examining the word “shred”. A simple keyword search on the word might return not only documents that contain text about shredding a document, but also those where two sports fans are having a conversation about “shredding the defense,” or even e-mails between spouses about eating Chinese “shredded pork” for dinner. Hence, e-mail research analytics seeks to group similar documents according to their semantic context so that documents about shredding as concealment or related to covering up an action would be grouped separately from casual e-mails about sports or dinner, thus markedly reducing the volume of e-mail requiring more thorough ocular review. Concept-based analysis goes beyond traditional search technology by enabling users to group documents according to a statistical inference about the co-occurrence of similar words. In effect, text analytics software allows documents to describe themselves and group themselves by context, as in the shred example. Because text analytics examines document sets and identifies relationships between documents according to their context, it can produce far more relevant results than traditional simple keyword searches.
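As a toy stand-in for concept-based grouping, the sketch below scores documents by the co-occurrence of their words, so a query about destroying documents lands nearer the concealment e-mail than the sports or dinner chatter. Real text analytics products use far richer statistical models; this is only illustrative:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def most_similar(target, documents):
    """Return the index of the document whose word co-occurrence
    profile is closest to the target phrase."""
    bags = [Counter(d.lower().split()) for d in documents]
    t = Counter(target.lower().split())
    sims = [cosine(t, b) for b in bags]
    return sims.index(max(sims))
```

Even this crude measure separates contexts by shared vocabulary rather than a single keyword hit, which is the intuition behind grouping "shred" documents by meaning.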

Using text analytics before filtering with keywords can be a powerful strategy for quickly understanding the content of a large corpus of unstructured, text-based data and for determining what is relevant to the search. After viewing concepts at a high level, users can select keywords more effectively because they better understand the possible code words or company-specific jargon. They can develop keywords based on actual content instead of guessing relevant terms, words, or phrases up front.

The When: In striving to understand the time frames in which key events took place, CFEs often need not only to identify the chronological order of documents (e.g., sorted by or limited to dates), but also to link related communication threads, such as e-mails, so that similar threads and communications can be identified and plotted over time. A thread comprises a set of messages connected by various relationships; each message is either an initial message or a reply to, or forwarding of, some other message in the set. Messages within a thread are connected by relationships that identify notable events, such as a reply versus a forward, or a change in correspondents. Quite often, e-mails accumulate long threads with similar subject headings, authors, and message content over time. These threads ultimately may lead to a decision, such as approval to proceed with a project or to take some other action. The approval may be critical to understanding the business events that led up to a particular journal entry. Seeing those threads mapped over time can be a powerful tool when trying to understand the business logic of a complex financial transaction.
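The subject-line threading described above can be sketched as follows. The messages are fabricated, and production tools would also exploit the Message-ID and In-Reply-To headers rather than subjects alone:

```python
import re
from datetime import datetime
from collections import defaultdict

# Hypothetical message metadata extracted from a collected mailbox.
MESSAGES = [
    {"subject": "Project Falcon approval", "sender": "cfo@example.com", "date": "2017-03-01"},
    {"subject": "Re: Project Falcon approval", "sender": "ceo@example.com", "date": "2017-03-03"},
    {"subject": "FW: Project Falcon approval", "sender": "ceo@example.com", "date": "2017-03-10"},
    {"subject": "Lunch?", "sender": "cfo@example.com", "date": "2017-03-02"},
]

def normalize_subject(subject):
    """Strip reply/forward prefixes so one thread shares one key."""
    return re.sub(r"^(?:(re|fw|fwd)\s*:\s*)+", "", subject.strip(), flags=re.I).lower()

def build_threads(messages):
    threads = defaultdict(list)
    for msg in messages:
        threads[normalize_subject(msg["subject"])].append(msg)
    # Plot-ready: each thread's messages in chronological order.
    for msgs in threads.values():
        msgs.sort(key=lambda m: datetime.strptime(m["date"], "%Y-%m-%d"))
    return dict(threads)

threads = build_threads(MESSAGES)
print(len(threads["project falcon approval"]))  # 3 messages in the approval thread
```

Sorting each thread chronologically is what lets the reviewer see, for example, that a forward to a new correspondent followed the internal approval by a week.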

In the context of fraud risk, text analytics can be particularly effective when threads and keyword hits are examined with a view to the familiar fraud triangle: the premise that all three components (incentive/pressure, opportunity, and rationalization) are present when fraud exists. This fraud-triangle-based analysis can be applied in a variety of business contexts where increases in the frequency of certain keywords related to incentive/pressure, opportunity, and rationalization can indicate an increased level of fraud risk.
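A hedged sketch of that keyword-frequency idea follows. The keyword lists are purely illustrative; in practice a lexicon would be developed per engagement from interviews, prior cases, and the organization’s own jargon:

```python
# Illustrative keyword lists only -- not a validated fraud lexicon.
TRIANGLE_KEYWORDS = {
    "pressure": {"quota", "deadline", "numbers", "target"},
    "opportunity": {"override", "password", "nobody checks", "backdoor"},
    "rationalization": {"deserve", "owed", "everyone does", "borrow"},
}

def triangle_score(text):
    """Count keyword hits per fraud-triangle leg for one message."""
    lowered = text.lower()
    return {leg: sum(lowered.count(kw) for kw in kws)
            for leg, kws in TRIANGLE_KEYWORDS.items()}

def all_legs_present(text):
    """Flag text in which every leg of the triangle registers at least once."""
    return all(v > 0 for v in triangle_score(text).values())

msg = "I deserve this; the quota is impossible and nobody checks the override log."
print(triangle_score(msg), all_legs_present(msg))
```

Run over a corpus period by period, rising counts in all three legs at once are a stronger signal than a spike in any single category.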

Some caveats are in order. Considering the overwhelming amount of text-based data within any modern enterprise, assurance professionals could never hope to analyze all of it; nor should they. The exercise would prove expensive and provide little value. Just as an external auditor would not reprocess or validate every sales transaction in a sales journal, he or she would not need to look at every related e-mail from every employee. Instead, any professional auditor would take a risk-based approach, identifying areas to test based on a sample of data or on an enterprise risk assessment. For text analytics work, the reviewer may choose to sample data from five or ten individuals in a high-risk department or a newly acquired business unit. And no matter how sophisticated the search and information retrieval tools used, there is no guarantee that all relevant or high-risk documents will be identified in large data collections. Moreover, different search methods may produce differing results, subject to a measure of statistical variation inherent in probability searches of any type. Just as a statistical sample of accounts receivable or accounts payable in the general ledger may not identify fraud, analytics reviews are similarly limited.

Text analytics can be a powerful fraud examination tool when integrated with traditional forensic data-gathering and analysis techniques such as interviews, independent research, and existing investigative tests involving structured, transactional data. For example, an anomaly identified in the general ledger related to the purchase of certain capital assets may prompt the examiner to review e-mail traffic among the key individuals involved, providing context around the circumstances and timing of events before the entry date. Furthermore, the forensic accountant may conduct interviews or perform additional independent research that may support or conflict with his or her investigative hypothesis. Integrating all three of these components to gain a complete picture of the fraud event can yield valuable information. While text analytics should never replace the traditional rules-based analysis techniques that focus on the client’s financial accounting systems, it is equally important to consider the communications surrounding key events, which are typically found in unstructured data rather than in the financial systems.

Financing Death One BitCoin at a Time

Over the past decade, fanatic religious ideologists have evolved into hybrid terrorists demonstrating exceptional versatility, innovation, opportunism, ruthlessness, and cruelty. Hybrid terrorists are a new breed of organized criminal. Merriam-Webster defines hybrid as “something that is formed by combining two or more things.” In the twentieth century, the military, intelligence forces, and law enforcement agencies each had a specialized skill set to employ in response to crises involving insurgency, international terrorism, and organized crime: military forces dealt with international insurgent threats to the government, intelligence forces with international terrorism, and law enforcement agencies with their respective countries’ organized crime entities. In the twenty-first century, greed, violence, and vengeance motivate the various groups of hybrid terrorists, who rely on organized criminal methods such as money laundering, wire-transfer fraud, drug and human trafficking, shell companies, and false identification to finance their operations.

Last week’s horrific terror bombing in Manchester brings to the fore, yet again, the issue of such terrorist financing and the increasing role of forensic accountants in combating it. Two of the main tools of modern terror financing schemes are money laundering and virtual currency.

Law enforcement and government agencies in collaboration with forensic accountants play key roles in tracing the source of terrorist financing to the activities used to inflict terror on local and global citizens. Law enforcement agencies utilize investigative and predictive analytics tools to gather, dissect, and convey data to distinguish patterns leading to future terrorist events. Government agencies employ database inquiries of terrorist-related financial information to evaluate the possibilities of terrorist financing and activities. Forensic accountants review the data for patterns related to previous transactions by utilizing data analysis tools, which assist in tracking the source of the funds.

As we all know, forensic accountants combine accounting knowledge with investigative skills in litigation support and investigative accounting settings. Several types of organizations, agencies, and companies frequently employ forensic accountants to provide investigative services, among them public accounting firms, law firms, law enforcement agencies, the Internal Revenue Service (IRS), the Central Intelligence Agency (CIA), and the Federal Bureau of Investigation (FBI).

Locating and halting the source of terrorist financing involves two tactics: following the money and drying up the money. Obstructing terrorist financing requires an understanding of both the original and the supply source of the illicit funds. As the financing is derived from both legal and illegal funding sources, terrorists may attempt to evade detection by funneling money through legitimate businesses, thus making it difficult to trace. Charitable organizations and reputable companies provide a legitimate channel through which terrorists may pass money for illicit activities without drawing the attention of law enforcement agencies. Patrons of legitimate businesses are often unaware that their personal contributions may support terrorist activities. However, terrorists also obtain funds from obvious illegal sources, such as kidnapping, fraud, and drug trafficking. Terrorists often change daily routines to evade law enforcement agencies, as predictable patterns create trails that are easy for skilled investigators to follow. Audit trails can be traced from the donor source to the terrorist by forensic accountants and law enforcement agencies tracking specific indicators; they reveal where the funds originate and whether the funds came from legal or illegal sources. The ACFE tells us that basic money laundering is a specific type of illegal funding source, one which provides a clear audit trail.

Money laundering is the process of obtaining and funneling illicit funds to disguise the connection with the original unlawful activity. Terrorists launder money to spend the unlawfully obtained money without drawing attention to themselves and their activities. To remain undetected by regulatory authorities, the illicit funds being deposited or spent need to be washed to give the impression that the money came from a seemingly reputable source. There are types of unusual transactions that raise red flags associated with money laundering in financial institutions. The more times an unusual transaction occurs, the greater the probability it is the product of an illicit activity. Money laundering may be quite sophisticated depending on the strategies employed to avoid detection. Some identifiers indicating a possible money-laundering scheme are: lack of identification, money wired to new locations, customer closes account after wiring or transferring copious amounts of money, executed out-of-the-ordinary business transactions, executed transactions involving the customer’s own business or occupation, and executed transactions falling just below the threshold trigger requiring the financial institution to file a report.
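One of the red flags above, transactions falling just below the reporting threshold, lends itself to a simple automated screen. The sketch below assumes the U.S. $10,000 currency transaction report (CTR) threshold and an arbitrary, purely illustrative margin; the ledger data is fabricated:

```python
from collections import Counter

CTR_THRESHOLD = 10_000  # U.S. currency transaction report trigger
MARGIN = 500            # illustrative choice; tune per engagement

def near_threshold(transactions, threshold=CTR_THRESHOLD, margin=MARGIN):
    """Return transactions that land just under the reporting trigger."""
    return [t for t in transactions
            if threshold - margin <= t["amount"] < threshold]

ledger = [
    {"account": "A-1", "amount": 9_800},
    {"account": "A-1", "amount": 9_950},
    {"account": "B-2", "amount": 4_000},
    {"account": "A-1", "amount": 12_000},
]

flagged = near_threshold(ledger)
# Repeated near-threshold activity from one account deserves a closer look.
repeat_offenders = Counter(t["account"] for t in flagged)
print(repeat_offenders)
```

A single near-threshold deposit proves nothing; it is the repetition across one account, the structuring pattern, that raises the probability of illicit activity.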

Money laundering takes place in three stages: placement, layering, and integration. In the placement stage, the cash proceeds from criminal activity enter the financial system by deposit. During the layering stage, the funds transfer into other accounts, usually offshore financial institutions, thus creating greater distance between the source and origin of the funds and its current location. Legitimate purchases help funnel the money back into the economy during the integration stage, the final stage.

Complicating all this for the investigator is virtual currency. Virtual currency, unlike traditional forms of money, does not leave a clear audit trail for forensic accountants to trace and investigate. Cases involving the use of virtual currency (e.g., Bitcoin and several rival currencies) create anonymity for the perpetrator and obstacles for investigators. Bitcoins have no physical form and provide a unique opportunity for terrorists to launder money across international borders without detection by law enforcement or government agencies. Bitcoin addresses are long strings of numbers and letters generated by cryptographic algorithms. A consumer uses a mobile phone or computer to create an online wallet with one or more Bitcoin addresses before commencing electronic transactions. Bitcoins may also be used to make legitimate purchases through various established online retailers.
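It is worth noting that while addresses are pseudonymous, the transaction ledger itself is public, so investigators can still attempt to follow flows between addresses even when the identities behind them are hidden. A minimal sketch of that “follow the money” traversal over a fabricated transfer graph:

```python
from collections import deque

# Toy transaction graph: payer address -> list of payee addresses.
# On a real public ledger this graph is reconstructed from transaction
# inputs and outputs; all addresses here are obviously fabricated.
TRANSFERS = {
    "addr_source": ["addr_mixer1", "addr_mixer2"],
    "addr_mixer1": ["addr_shop"],
    "addr_mixer2": ["addr_shop", "addr_cashout"],
}

def reachable_addresses(start, transfers):
    """Breadth-first walk of everywhere funds from `start` could flow --
    the essence of following the money on a transparent ledger."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in transfers.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(reachable_addresses("addr_source", TRANSFERS)))
```

The hard part in practice is not the traversal but linking pseudonymous addresses to real-world identities, which is exactly where the audit trail goes cold.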

Current international anti-money laundering laws aid in fighting the war against terrorist financing; however, those laws presuppose actual cash shipments between countries and criminal networks (or, at the very least, funds transfers between banks). They do not reach virtual currency transactions, which involve no physical movement of cash. According to the website Bitcoin.org, “Bitcoin uses peer-to-peer technology to operate with no central authority or banks”.

In summary, terrorist organizations find virtual currency an effective method for raising illicit funds because, unlike cash transactions, cyber technology offers anonymity with less regulatory oversight. Due to the anonymity factor, Bitcoins are an innovative and convenient way for terrorists to launder money and sell illegal goods. Virtual currencies are appealing to terrorist financiers since funds can be swiftly sent across borders in a secure, cheap, and highly secretive manner. The obscurity of Bitcoin allows international funding sources to conduct exchanges without a trace of evidence. This commingling effect is like traditional money laundering but without the regulatory oversight. Government and law enforcement agencies must, as a result, be able to share information with public regulators when they become suspicious of terrorist financing.

Forensic accounting technology is most beneficial when used in conjunction with the analysis tools of law enforcement agencies to predict and analyze future terrorist activity. Even though some of the tools in a forensic accountant’s arsenal are useful in tracking terrorist funds, the ability to identify conceivable terrorist threats is limited. To identify the future activities of terrorist groups, forensic accountants and law enforcement agencies should cooperate with one another by mutually incorporating the analytical tools utilized by each. Agencies and government officials should become familiar with virtual currencies like Bitcoin. Because of the anonymity and lack of regulatory oversight, virtual currency offers terrorist groups a useful means to finance illicit activities on an international scale. In the face of the challenge, new governmental entities may be needed to tie together the financial forensics efforts of the different stakeholder organizations so that information sharing is not compartmentalized.

RVACFES May 2017 Event Sold-Out!

On May 17th and 18th, the Central Virginia ACFE Chapter and our partners, the Virginia State Police and the Association of Certified Fraud Examiners (ACFE), were joined by an overflow crowd of audit and assurance professionals for the ACFE’s training course ‘Conducting Internal Investigations’. The sold-out May 2017 seminar was the ninth that our Chapter has hosted over the years with the Virginia State Police, utilizing a distinguished list of certified ACFE instructor-practitioners.

Our internationally acclaimed instructor for the May seminar was Gerard Zack, CFE, CPA, CIA, CCEP. Gerry has provided fraud prevention and investigation, forensic accounting, and internal and external audit services for more than 30 years. He has worked with commercial businesses, not-for-profit organizations, and government agencies throughout North America and Europe. Prior to starting his own practice in 1990, Gerry was an audit manager with a large international public accounting firm. As founder and president of Zack, P.C., he has led numerous fraud investigations and designed customized fraud risk management programs for a diverse client base. Through Zack, P.C., he also provides outsourced internal audit services, compliance and ethics programs, enterprise risk management, fraud risk assessments, and internal control consulting services.

Gerry is a Certified Fraud Examiner (CFE) and Certified Public Accountant (CPA) and has focused most of his career on audit and fraud-related services. Gerry serves on the faculty of the Association of Certified Fraud Examiners (ACFE) and is the 2009 recipient of the ACFE’s James Baker Speaker of the Year Award. He is also a Certified Internal Auditor (CIA) and a Certified Compliance and Ethics Professional (CCEP).

Gerry is the author of Financial Statement Fraud: Strategies for Detection and Investigation (published 2013 by John Wiley & Sons), Fair Value Accounting Fraud: New Global Risks and Detection Techniques (2009 by John Wiley & Sons), and Fraud and Abuse in Nonprofit Organizations: A Guide to Prevention and Detection (2003 by John Wiley & Sons). He is also the author of numerous articles on fraud and teaches seminars on fraud prevention and detection for businesses, government agencies, and nonprofit organizations. He has provided customized internal staff training on specialized auditing issues, including fraud detection in audits, for more than 50 CPA firms.

Gerry is also the founder of the Nonprofit Resource Center, through which he provides antifraud training and consulting and online financial management tools specifically geared toward the unique internal control and financial management needs of nonprofit organizations. Gerry earned his M.B.A at Loyola University in Maryland and his B.S.B.A at Shippensburg University of Pennsylvania.

To some degree, organizations of every size, in every industry, and in every city, experience internal fraud. No entity is immune. Furthermore, any member of an organization can carry out fraud, whether it is committed by the newest customer service employee or by an experienced and highly respected member of upper management. The fundamental reason for this is that fraud is a human problem, not an accounting problem. As long as organizations are employing individuals to perform business functions, the risk of fraud exists.

While some organizations aggressively adopt strong zero tolerance anti-fraud policies, others simply view fraud as a cost of doing business. Despite varying views on the prevalence of, or susceptibility to, fraud within a given organization, all must be prepared to conduct a thorough internal investigation once fraud is suspected. Our ‘Conducting Internal Investigations’ event was structured around the process of investigating any suspected fraud from inception to final disposition and beyond.

What constitutes an act that warrants an examination can vary from one organization to another and from jurisdiction to jurisdiction. It is often resolved based on a definition of fraud adopted by an employer or by a government agency. There are numerous definitions of fraud, but a popular example comes from the joint ACFE-COSO publication, Fraud Risk Management Guide:

Fraud is any intentional act or omission designed to deceive others, resulting in the victim suffering a loss and/or the perpetrator achieving a gain.

However, many law enforcement agencies have developed their own definitions, which might be more appropriate for organizations operating in their jurisdictions. Consequently, fraud examiners should determine the appropriate legal definition in the jurisdiction in which the suspected offense was committed.

Fraud examination is a methodology for resolving fraud allegations from inception to disposition. More specifically, fraud examination involves:

–Assisting in the detection and prevention of fraud;
–Initiating the internal investigation;
–Obtaining evidence and taking statements;
–Writing reports;
–Testifying to findings.

A well-run internal investigation can enhance a company’s overall well-being and can help detect the source of lost funds, identify responsible parties, and recover losses. It can also provide a defense to legal charges by terminated or disgruntled employees. But perhaps most importantly, an internal investigation can signal to every company employee that the company will not tolerate fraud.

Our two-day seminar agenda included Gerry’s in depth look at the following topics:

–Assessment of the risk of fraud within an organization and responding when it is identified;
–Detection and investigation of internal frauds with the use of data analytics;
–The collection of documents and electronic evidence needed during an investigation;
–The performance of effective information gathering and admission seeking interviews;
–The wide variety of legal and regulatory concerns related to internal investigations.

Gerry did his usual tremendous job in preparing the professionals in attendance to deal with every step of an internal fraud investigation, from receiving the initial allegation to testifying as a witness. The participants learned to lead an internal investigation with accuracy and confidence by gaining knowledge of topics such as the relevant legal aspects of internal investigations, the use of computers and analytics during the investigation, collecting and analyzing internal and external information, interviewing witnesses, and writing effective reports.

Industrialized Theft

In at least one way you have to hand it to Ethically Challenged, Inc.; it sure knows how to innovate, and the recent spate of ransomware attacks proves it also knows how to make what’s old new again. Although society’s criminal opponents engage in constant business process improvement, they’ve proven again and again that they needn’t commit each new crime from scratch. In the age of Moore’s law, these tasks have been readily automated and can run in the background at scale without the need for significant human intervention. Crime automations like the WannaCry virus allow transnational organized crime groups to gain the same efficiencies and cost savings that multinational corporations obtained by leveraging technology to carry out their core business functions. That’s why today it’s possible for hackers to rob not just one person at a time but 100 million or more, as the world saw with the Sony PlayStation and Target data breaches and now with the WannaCry worm.

As covered in our Chapter’s training event of last year, ‘Investigating on the Internet’, exploit tool kits like Blackhole and SpyEye commit crime “automagically” by minimizing the need for human labor, thereby dramatically reducing criminal costs. They also allow hackers to pursue the “long tail” of opportunity, committing millions of thefts in small amounts so that (in many cases) victims don’t report them and law enforcement has no way to track them. While high-value targets (companies, nations, celebrities, high-net-worth individuals) are specifically and individually targeted, the majority of the public is hacked by automated, scripted computer malware: one large digital fishing net that scoops up anything and everything online with an exploitable vulnerability. Given these obvious advantages, as of 2016 an estimated 61 percent of all online attacks were launched by fully automated crime tool kits, returning phenomenal profits for the Dark Web overlords who expertly orchestrated them. Modern crime has been reduced and distilled to a software program that anybody can run at tremendous profit.

Not only can botnets and other tools be used over and over to attack and offend, but they’re now enabling the commission of much more sophisticated crimes such as extortion, blackmail, and shakedown rackets. In an updated version of the old $500 million Ukrainian Innovative Marketing Solutions “virus detected” scam, fraudsters have unleashed a new torrent of malware that holds the victim’s computer hostage until a ransom is paid and the scammer provides an unlock code to restore access to the victim’s own files. Ransomware attack tools are included in a variety of Dark Net tool kits, such as WannaCry and Gameover Zeus. According to the ACFE, there are several varieties of this scam, including one that purports to come from law enforcement. Around the world, users who become infected with the Reveton Trojan suddenly have their computers lock up and their full screens covered with a notice, allegedly from the FBI. The message, bearing an official-looking large, full-color FBI logo, states that the user’s computer has been locked for reasons such as “violation of the federal copyright law against illegally downloaded material” or because “you have been viewing or distributing prohibited pornographic content.”

In the case of the Reveton Trojan, to unlock their computers, users are informed that they must pay a fine ranging from $200 to $400, accepted only via a prepaid voucher from Green Dot’s MoneyPak, which victims are instructed they can buy at their local Walmart or CVS; victims of WannaCry are required to pay in Bitcoin. To further intimidate victims and drive home the fact that this is a serious police matter, the Reveton scammers prominently display the alleged violator’s IP address on the screen as well as snippets of video footage previously captured from the victim’s Webcam. As with the current WannaCry exploit, the Reveton scam has successfully targeted tens of thousands of victims around the world, with the attack localized by country, language, and police agency. Thus, users in the U.K. see a notice from Scotland Yard, other Europeans get a warning from Europol, and victims in the United Arab Emirates see the threat, translated into Arabic, purportedly from the Abu Dhabi Police HQ.

WannaCry is even more pernicious than Reveton, though, in that it actually encrypts all the files on a victim’s computer so that they can no longer be read or accessed. Alarmingly, variants of this type of malware often present a ticking-bomb-type countdown clock advising users that they have only forty-eight hours to pay $300 or all of their files will be permanently destroyed. Akin to threatening “if you ever want to see your files alive again,” these ransomware programs gladly accept payment in Bitcoin. The message to these victims is no idle threat. Whereas previous ransomware might trick users by temporarily hiding their files, newer variants use strong 256-bit Advanced Encryption Standard (AES) cryptography to lock user files so that they become irrecoverable. These types of exploits earn tens of millions of dollars for the criminal programmers who develop and sell them online to other criminals.

Automated ransomware tools have even migrated to mobile phones, affecting Android handset users in certain countries. Not only have individuals been harmed by the ransomware scourge, so too have companies, nonprofits, and even government agencies, the most infamous of which was the Swansea Police Department in Massachusetts some years back, which became infected when an employee opened a malicious e-mail attachment. Rather than losing its irreplaceable police case files to the scammers, the agency was forced to open a Bitcoin account and pay a $750 ransom to get its files back. The police lieutenant told the press he had no idea what a Bitcoin was or how the malware functioned until his department was struck in the attack.

As the ACFE and other professional organizations have told us, cybercrime has evolved highly sophisticated methods of operation to sell everything from methamphetamine to live-streamed child sexual abuse online. It has rapidly adopted existing tools of anonymity such as the Tor browser to establish Dark Net shopping malls, and criminal consulting services such as hacking and murder for hire are available at the click of a mouse. Untraceable and anonymous digital currencies, such as Bitcoin, are breathing new life into the underground economy and allowing for the rapid exchange of goods and services. With these additional revenues, cyber criminals are becoming more disciplined and organized, significantly increasing the sophistication of their operations. Business models are being automated wherever possible to maximize profits, and botnets, easily trained on any target of the scammer’s choosing, can threaten legitimate global commerce. Fundamentally, it’s been done. As WannaCry demonstrates, the computing- and Internet-based crime machine has been built. With these systems in place, the depth and global reach of cybercrime mean that crime now scales, and it scales exponentially. Yet, as bad as this threat is today, it is about to become much worse as we hand such scammers billions more targets in the age of ubiquitous computing and the Internet of Things.