Category Archives: Data Retention Policy

Analytics Confronts the Normal

The Information Systems Audit and Control Association (ISACA) tells us that we now produce and store more data in a single day than mankind did altogether over the previous 2,000 years. The data produced daily is estimated at one exabyte: one quintillion bytes, or one million terabytes. As recently as 15 years ago, a terabyte was considered a huge amount of data; today the latest Swiss Army knife comes with a 1 terabyte flash drive.

When an interaction with a business is complete, the information from that interaction is only as good as the pieces of data that get captured during it. A customer walks into a bank and withdraws cash. The transaction that just happened gets stored as a monetary withdrawal with certain characteristics in the form of associated data. There might be information on the date and time of the withdrawal; there may be information on which customer made it (if multiple customers operate the same account). The amount of cash withdrawn, the account from which the money was drawn, the teller or ATM that facilitated the withdrawal, the balance on the account afterward, and so forth, are all typically recorded. And these are just a few of the data elements that can be captured in any withdrawal transaction. Just imagine all the different interactions possible across all the assorted products a bank has to offer: checking accounts, savings accounts, credit cards, debit cards, mortgage loans, home equity lines of credit, brokerage, and so on. The data captured during all these interactions goes through data-checking processes and gets stored somewhere internally or in the cloud. The data stored this way has been growing steadily over the past few decades and, most importantly for fraud examiners, most of it carries tons of information about the nuances of individual customers’ normal behavior.
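
To make the idea concrete, here is a minimal sketch (in Python, with purely illustrative field names rather than any particular bank’s schema) of the kind of structured record a single withdrawal might generate:

```python
from dataclasses import dataclass
from datetime import datetime
from decimal import Decimal

@dataclass
class WithdrawalRecord:
    """One captured withdrawal interaction (field names are illustrative only)."""
    timestamp: datetime      # when the withdrawal happened
    account_id: str          # account the money was drawn from
    customer_id: str         # which joint holder made the withdrawal
    channel_id: str          # teller or ATM that facilitated it
    amount: Decimal          # cash withdrawn
    balance_after: Decimal   # balance remaining after the withdrawal

record = WithdrawalRecord(
    timestamp=datetime(2017, 3, 14, 10, 42),
    account_id="ACCT-001",
    customer_id="CUST-17",
    channel_id="ATM-0042",
    amount=Decimal("200.00"),
    balance_after=Decimal("1350.00"),
)
```

Every one of these captured fields later becomes a dimension along which “normal” can be measured.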

In addition to what the customer does, examiners can use the same data, viewed along a different dimension, to understand what is normal for certain other related entities. For example, by looking at all the customer withdrawals at a single ATM, CFEs can gain a good understanding of what is normal for that particular ATM terminal. Understanding the normal behavior of customers is very useful in detecting fraud, since deviation from normal behavior is such a primary indicator of fraud. Understanding non-fraud, or normal, behavior is important not only at the main account holder level but also at all the entity levels associated with that individual account. The same data presents completely different information when observed in the context of one entity versus another. In this sense, having all the data saved, analyzed, and understood is a key element in tackling the fraud threat to any organization.
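
As a rough illustration of “normal for the customer” versus “normal for the ATM,” the sketch below (using pandas, with made-up data and column names) scores the same withdrawals against two different entity baselines:

```python
import pandas as pd

# One row per withdrawal; columns are illustrative.
txns = pd.DataFrame({
    "customer_id": ["C1", "C1", "C1", "C2", "C2", "C2"],
    "atm_id":      ["A9", "A9", "A9", "A9", "A9", "A9"],
    "amount":      [40.0, 60.0, 50.0, 500.0, 450.0, 2500.0],
})

def add_entity_zscore(df, entity_col, value_col="amount"):
    """Score each transaction against the normal behavior of one entity dimension."""
    grp = df.groupby(entity_col)[value_col]
    mean = grp.transform("mean")
    std = grp.transform("std").replace(0, 1)
    df[f"z_vs_{entity_col}"] = (df[value_col] - mean) / std
    return df

txns = add_entity_zscore(txns, "customer_id")  # what is normal for each customer
txns = add_entity_zscore(txns, "atm_id")       # what is normal for this ATM terminal
print(txns)  # the same rows, now scored against two different entity baselines
```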

Any systematic, numbers-based understanding of fraud as a past event depends on an accurate description of exactly what happened, carried in the data stream that accumulated before, during, and after the fraud scenario occurred. Allowing the data to speak is the key to the success of any model-based system. This data needs to be saved and interpreted very precisely for the examiner’s models to make sense. The first crucial step in building a model is to define, understand, and interpret fraud scenarios correctly. At first glance, this seems like a very easy problem to solve. In practice, it is a far more complicated process than it appears.

The level of understanding of the fraud episode or scenario itself varies greatly among the different business processes involved with handling the various products and functions within an organization. Typically, fraud can have a significant impact on the bottom line of any organization. Looking at the level of specific information that is systematically stored and analyzed about fraud in financial institutions, for example, one would conclude that such storage needs to be far more systematic and rigorous than it typically is today. Several factors influence this. Unlike some of the other types of risk involved in client organizations, fraud risk is a censored problem. For example, if we are looking at serious delinquency, bankruptcy, or charge-off risk in credit card portfolios, the actual dollars-at-risk quantity is very well understood. Based on past data, it is relatively straightforward to quantify precise credit dollars at risk by looking at how many customers defaulted on a loan, failed to pay their monthly bill for three or more cycles, or declared bankruptcy. From this, it is easy to quantify the amount at risk as far as credit risk goes. In fraud, however, it is virtually impossible to quantify the actual amount that would have gone out the door, because the fraud is stopped immediately after detection. The problem is censored as soon as some intervention takes place, making it difficult to quantify the potential risk precisely.

Another challenge in quantifying fraud is how well the fraud episode itself gets recorded. Consider the case of a credit card number being stolen without the physical card being stolen. For a certain period, both the legitimate cardholder and the fraudster are charging on the card. If the issuing institution’s fraud detection system doesn’t identify the fraudulent transactions as they happen in real time, the fraud is typically identified only when the cardholder gets the monthly statement, figures out that some of the charges were not his or hers, and calls the issuer to report the fraud. In the not too distant past, all that the bank recorded was the cardholder’s estimate of when the fraud episode began, even though the cardholder likely shared additional details about the fraudulent transactions. If all that gets recorded is the cardholder’s estimate of when the episode began, ambiguity is introduced regarding the granularity of the actual fraud episode, and the initial estimate of the fraud amount becomes a rough estimate at best.
In cases where the bank’s fraud detection system did catch the fraud during the actual episode, the fraudulent transactions tended to be recorded by a fraud analyst, and sometimes not too accurately. If a transaction was marked as fraud or non-fraud incorrectly, the error was typically not corrected even after the right information flowed in. And when the transactions that were actually fraudulent were eventually identified from the actual postings, relating them back to the authorization transactions was often not a straightforward process. Sometimes the amounts varied slightly; the authorization for a restaurant charge, for example, usually does not include the tip the customer added to the bill, so the posted amount at reconciliation looks slightly different from the authorized amount. All of this poses an interesting challenge when designing a data-driven analytical system to combat fraud.
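
A hedged sketch of the kind of matching logic involved, assuming deliberately simplified records and tolerances that a real issuer would tune far more carefully:

```python
from datetime import date

# Illustrative, simplified records: real matching keys vary by issuer and network.
auths = [
    {"card": "4111", "merchant": "BISTRO 21", "date": date(2017, 5, 2), "amount": 42.00},
]
postings = [
    {"card": "4111", "merchant": "BISTRO 21", "date": date(2017, 5, 4), "amount": 50.40},
]

def match_posting(auth, postings, days=5, tolerance=0.25):
    """Link a posted transaction back to its authorization, allowing for tips and timing lag."""
    for post in postings:
        same_card = post["card"] == auth["card"] and post["merchant"] == auth["merchant"]
        close_in_time = abs((post["date"] - auth["date"]).days) <= days
        close_in_amount = abs(post["amount"] - auth["amount"]) <= tolerance * auth["amount"]
        if same_card and close_in_time and close_in_amount:
            return post
    return None

print(match_posting(auths[0], postings))  # posted amount differs by the tip, but still matches
```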

The level of accuracy associated with recording fraud data also tends to be dependent on whether the fraud loss is a liability for the customer or to the financial institution. To a significant extent, the answer to the question, “Whose loss is it?” really drives how well past fraud data is recorded. In the case of unsecured lending such as credit cards, most of the liability lies with the banks, and the banks tend to care a lot more about this type of loss. Hence systems are put in place to capture this data on a historical basis reasonably accurately.

In the case of secured lending, ID theft, and so on, a significant portion of the liability is really on the customer, and it is up to the customer to prove to the bank that he or she has been defrauded. Interestingly, this shift of liability also tends to have an impact on the quality of the fraud data captured. In the case of fraud associated with automated clearing house (ACH) batches and domestic and international wires, the problem is twofold: The fraud instances are very infrequent, making it impossible for the banks to have a uniform method of recording frauds; and the liability shifts are dependent on the geography.  Most international locations put the onus on the customer, while in the United States there is legislation requiring banks to have fraud detection systems in place.

The extent to which our client organizations take responsibility also tends to depend on how much they care about the customer who has been defrauded. When a very valuable customer complains about fraud on her account, a bank is likely to pay attention. Given that most such frauds are not large scale, there is less perceived need to establish elaborate systems to collect the data and keep track of past irregularities. Past fraud information is also influenced heavily by whether the fraud is third-party or first-party fraud. Third-party fraud is fraud committed by someone other than the two parties involved in the transaction. In first-party fraud, the perpetrator is the one who has the relationship with the bank, and the fraudster goes to great lengths to prevent the bank from knowing that fraud is happening. In this case there is no reporting of the fraud by the customer; until the bank figures out that fraud is going on, there is no data to collect. Such fraud can also go on for quite a while, and some of it might never be identified, which poses some interesting problems. Internal fraud, where an employee of the institution is committing the fraud, can also take significantly longer to find, so the data on it tends to be scarce as well.

In summary, one of the most significant challenges in fraud analytics is to build a sufficient database of normal client transactions. The normal transactions of any organization constitute the baseline from which abnormal transactions, whether fraudulent or merely irregular, can be identified and analyzed. Pinpointing the irregular is thus foundational to developing the transaction-processing edits that prevent irregular transactions embodying fraud from even being processed and paid on the front end, and it furnishes the key to modern, analytically based fraud prevention.

From Inside the Building

By Rumbi Petrozzello, CFE, CPA/CFF
2017 Vice-President – Central Virginia Chapter ACFE

Several months ago, I attended an ACFE session where one of the speakers had worked on the investigation of Edward Snowden. He shared that one of the ways Snowden had gained access to some of the National Security Agency (NSA) data he downloaded was through the inadvertent assistance of his supervisor. According to this investigator, Snowden’s supervisor shared his password with Snowden, giving Snowden access to information beyond his own level of authorization. In addition, when the security personnel who reviewed employee downloads noticed that Snowden was downloading copious amounts of data, they approached Snowden’s supervisor to question why this might be the case. The supervisor, while acknowledging the downloads, stated that Snowden wasn’t really doing anything untoward.

At another ACFE session, a speaker shared information with us about how Chelsea Manning was able to download and remove data from a secure government facility. Manning would come to work wearing headphones, listening to music on a Discman. Security would hear the music blasting and scan the CDs. Day after day, it was the same scenario: Manning showed up to work, music blaring. Security staff grew so accustomed to Manning, the Discman, and her CDs that when she came to work through security with a blank CD boldly labelled “LADY GAGA”, security didn’t blink. They should have, because it was that CD, and others like it that she later carried home from work, that contained the data she eventually shared with WikiLeaks.

Both these high-profile disasters are notable examples of the bad outcomes that arise when an internal threat is realized. Both Snowden and Manning worked for organizations that had, and have, more rigorous security procedures and policies in place than most entities. Yet neither Snowden nor Manning needed to perform any magic tricks to sneak data out of the secure sites where the target data was held; it seems all it took was audacity on one side and trust and complacency on the other.

When organizations deal with outside parties, such as vendors and customers, they tend to spend a lot of time setting up the structures and systems that will guide how the organization interacts with those vendors and customers. Generally, companies take these systems of control seriously, if only because of the problems they will have to deal with during annual external audits if they don’t. The typical new employee will spend a lot of time learning the steps from the point a customer places an order through to the point the customer’s payment is received. There will be countless training manuals to refer to and many a reminder from co-workers who may be negatively impacted if the rookie screws up.

However, this scenario tends not to hold up when it comes to how employees typically share information and interact with each other. This is true despite the elevated risk that a rogue insider represents. Often, when we think about an insider causing harm to a company through fraudulent acts, we tend to imagine a villain, someone we could identify easily because s/he is obviously a terrible person. After all, only a terrible person could defraud their employer. In fact, as the ACFE tells us, the most successful fraudsters are the ones who gain our trust and who, therefore, don’t really have to do too much for us to hand over the keys to the kingdom. As CFEs and Forensic Accountants, we need to help those we work with understand the risks that an insider threat can represent and how to mitigate that risk. It’s important, in advising our clients, to guide them toward the creation of preventative systems of policy and procedure that they sometimes tend to view as too onerous for their employees. Excuses I often hear run along the lines of:

• “Our employees are like family here; we don’t need to have all these rules and regulations.”

• “I keep a close eye on things, so I don’t have to worry about all that.”

• “My staff knows what they are supposed to do; don’t worry about it.”

Now, if people can easily walk sensitive information out of locations that have documented systems and are known to be high security operations, can you imagine what they can do at your client organizations? Especially if the employer is assuming that their employees magically know what they are supposed to do? This is the point that we should be driving home with our clients. We should look to address the fact that both trust and complacency in organizations can be problems as well as assets. It’s great to be able to trust employees, but we should also talk to our clients about the fraud triangle and how one aspect of it, pressure, can happen to any staff member, even the most trusted. With that in mind, it’s important to institute controls so that, should pressure arise with an employee, there will be little opportunity open to that employee to act. Both Manning and Snowden have publicly spoken about the pressures they felt that led them to act in the way they did. The reason we even know about them today is that they had the opportunity to act on those pressures.

I’ve spent time consulting with large organizations, often for months at a time. During those times, I got to chat with many members of staff, including security. On a couple of occasions, I forgot and left my building pass at home. Even though I was on a first name basis with the security staff and had spent time chatting with them about our personal lives, they still asked me for identification and looked me up in the system. I’m sure they thought I was a nice and trustworthy enough person, but they knew to follow procedures and always checked on whether I was still authorized to access the building. The important point is that they, despite knowing me, knew to check and followed through.

Examples of controls employees should be reminded to follow are:

• Don’t share your password with a fellow employee. If that employee cannot access certain information with their own password, either they are not authorized to access that information or they should speak with an administrator to gain the desired access. Sharing a password seems like a quick and easy solution when under time pressures at work, but remind employees that when they share their login information, anything that goes awry will be attributed to them.

• Always follow procedures. Someone looking for an opportunity only needs one.

• When something looks amiss, thoroughly investigate it. Even if someone tells you that all is well, verify that this is indeed the case.

• Explain to staff and management why a specific control is in place and why it’s important. If they understand why they are doing something, they are more likely to see the control as useful and to apply it.

• Schedule training on a regular basis to remind staff of the controls in place and the systems they are to follow. You may believe that staff knows what they are supposed to do, but reminding them reduces the risk of their relying on hearsay and secondhand information. Management is often surprised by the gap between what they think staff knows and what the staff really knows.

It should be clear to your clients that they have control over who has access to sensitive information and when and how it leaves their control. It doesn’t take much for an insider to gain access to this information. A face you see smiling at you daily is the face of a person you can grow comfortable with and with whom you can drop your guard. However, if you already have an adequate system and effective controls in place, you take the personal out of the equation and everyone understands that we are all just doing our job.

Where the Money Is

One of the followers of our Central Virginia Chapter’s group on LinkedIn is a bank auditor heavily engaged in his organization’s analytics-based fraud control program. He was kind enough to share some of his thoughts regarding his organization’s sophisticated anti-fraud data modelling program as material for this blog post.

Our LinkedIn connection reports that, in his opinion, getting fraud data accurately captured, categorized, and stored is the first, vitally important challenge to using data-driven technology to combat fraud losses. This might seem relatively easy to those not directly involved in the process, but experience quickly reveals that having fraud-related data stored reliably over a long period of time and in a readily accessible format represents a significant challenge, requiring a systematic approach at all levels of any organization serious about the effective application of analytically supported fraud management. The idea that any single piece of data may be of potential importance to addressing a problem is a relatively new concept in the history of banking and of most other types of financial enterprises.

Accumulating accurate data starts with an overall vision of how the multiple steps in the process connect to affect the outcome. Every member of the fraud control team needs to understand how important each pre-defined process step is to capturing the information correctly, from the person responsible for risk management in the organization, to the people who run the fraud analytics program, to the person who designs the data layout, to the person who enters the data. Even a customer service analyst or a fraud analyst not marking a certain type of transaction correctly as fraud can have an ongoing impact on developing an accurate fraud control system. It really helps to establish rigorous processes of data entry on the front end and to explain to all players exactly why those specific processes are in place. Process without communication and communication without process are both unlikely to produce desirable results. For everyone to appreciate the importance of recording fraud information correctly, management needs to communicate a general understanding of how a data-driven detection system (whether based on simple rules or on sophisticated models) is developed.

Our connection goes on to say that even after an organization has implemented a fraud detection system that is based on sophisticated techniques and that can execute effectively in real time, it’s important for the operational staff to use the output recommendations of the system effectively. There are three ways that fraud management can improve results within even a highly sophisticated system like that of our LinkedIn connection.

The first strategy is never to allow operational staff to second-guess a sophisticated model at will. Very often, a model score of 900 (let’s say this is an indicator of very high fraud risk), when combined with some decision keys and sometimes on its own, can perform extremely well as a fraud predictor. It’s good practice to use the scores in this high-risk range generated by a tested model as is and not to allow individual analysts to adjust them further. This policy has to be completely understood and controlled at the operational level. Using a well-developed fraud score as is, without watering it down, is one of the most important operational strategies for the long-term success of any model. Applying this rule also makes it simpler to identify instances of model scoring failure, because the scores remain free of any subsequent analyst adjustments.

Second, fraud analysts will have to be trained to use the scores and the reason codes (reason codes explain why the score is indicative of risk) effectively in operations. Typically, this is done by writing some rules in operations that incorporate the scores and reason codes as decision keys. In the fraud management world, these rules are generally referred to as strategies. It’s extremely important to ensure strategies are applied uniformly by all fraud analysts. It’s also essential to closely monitor how the fraud analysts are operating using the scores and strategies.
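
A minimal sketch of what such a strategy might look like in code, with hypothetical score bands and reason codes (the 900 threshold echoes the example above; everything else is illustrative):

```python
# score: model output on a 0-999 scale; reason_codes: why the score is high (codes are made up).
def apply_strategy(score, reason_codes):
    """Uniform operational strategy: the high-score band is actioned as is, never second-guessed."""
    if score >= 900:                            # very high fraud risk: act on the score alone
        return "decline_and_queue_for_review"
    if score >= 700 and "R01" in reason_codes:  # e.g., R01 = out-of-pattern merchant country
        return "step_up_authentication"
    if score >= 700:
        return "queue_for_review"
    return "approve"

print(apply_strategy(930, ["R01"]))  # decline_and_queue_for_review
print(apply_strategy(720, ["R07"]))  # queue_for_review
```

The point is that the top band is handled uniformly with no analyst override, while the lower bands leave room for reason-code-driven treatment.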

Third, it’s very important to train the analysts to mark transactions that are confirmed or reported to be fraudulent by the organization’s customers accurately in their data store.

All three of these strategies may seem very straightforward to accomplish, but in practical terms they are not that easy without a lot of planning, time, and energy. A superior fraud detection system can be rendered almost useless if it is not used correctly. It is extremely important to allow the right level of employee to exercise the right level of judgment. Again, individual fraud analysts should not be allowed to second-guess the efficacy of a fraud score that is the result of a sophisticated model. Similarly, planners of operations should take into account all practical limitations when coming up with fraud strategies. Ensuring that all of this gets done the right way, with the right emphasis, ultimately leads the organization to good, effective fraud management.

At the heart of any fraud detection system is a rule or a model that attempts to detect a behavior that has been observed repeatedly, with varying frequency, in the past, and classifies it as fraud or non-fraud with a certain rank ordering. We would like to figure out this behavior scenario in advance and stop it in its tracks. What we observe from historical data and from our experience needs to be converted into some sort of rule that can be systematically applied to the data in real time in the future. We expect that these rules or models will improve our chance of detecting aberrations in behavior and help us distinguish between genuine customers and fraudsters in a timely manner. The goal is to stop the bleeding of cash from the account, and to do so as close to the start of the fraud episode as we can. If banks can accurately identify early indicators of ongoing fraud, significant losses can be avoided.

In statistical terms, what we define as a fraud scenario is the dependent variable, the variable we are trying to predict (or detect) using a model. We then use a set of independent variables (so called even though, in real life, many of the variables used in a model have some dependency on each other) to detect fraud. Fundamentally, at this stage we are trying to model the fraud scenario using these independent variables. Typically, a model attempts to detect fraud as opposed to predict fraud. We are not trying to say that fraud is likely to happen on this entity in the future; rather, we are trying to determine whether fraud is likely happening at the present moment, and the goal of the fraud model is to identify this as close to the time the fraud starts as possible.
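
A bare-bones sketch of this framing, using synthetic data and scikit-learn’s logistic regression purely for illustration (real fraud models are far richer in features and technique):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, illustrative data: each row is a transaction described by a few
# "independent" variables; y is the dependent variable (1 = confirmed fraud).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # e.g., amount z-score, velocity, distance from home
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 2).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]     # estimated probability that fraud is happening right now
print("transactions flagged at a 0.9 threshold:", int((scores > 0.9).sum()))
```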

In credit risk management, we try to predict whether there will likely be serious delinquency or default in the future, based on the behavior the entity exhibits today. With respect to detecting fraud, building a model without accurate fraud data is akin to not knowing where the target is on a shooting range. If a model or rule is built on data that is only 75 percent accurate, the model’s accuracy and effectiveness will be suspect as well. There are two sides to this problem. Suppose we mark 25 percent of the fraudulent transactions inaccurately as non-fraud, or good, transactions. Not only do we miss out on learning from a significant portion of fraudulent behavior; by misclassifying it as non-fraud, we also lead the model to assume the behavior is actually good. Misclassification of data thus affects both sides of the equation. Accurate fraud data is fundamental to addressing the fraud problem effectively.
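
To see why label quality matters so much, one can simulate the 25 percent misclassification described above and compare models trained on clean versus mislabeled data. This is only a toy illustration with synthetic data, but one would expect the mislabeled model to recover fewer of the true frauds:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 3))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 1.5).astype(int)   # "true" fraud flags (illustrative)

# Mislabel 25 percent of the fraudulent rows as good transactions before training.
y_noisy = y_true.copy()
fraud_idx = np.flatnonzero(y_true == 1)
flip = rng.choice(fraud_idx, size=len(fraud_idx) // 4, replace=False)
y_noisy[flip] = 0

clean = LogisticRegression(max_iter=1000).fit(X, y_true)
noisy = LogisticRegression(max_iter=1000).fit(X, y_noisy)
for name, model in [("clean labels", clean), ("25% mislabeled", noisy)]:
    print(name, "recall on true fraud:", round(recall_score(y_true, model.predict(X)), 3))
```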

So, in summary, collecting accurate fraud data is not the responsibility of just one set of people in any organization. The entire mind-set of the organization should be geared around collecting, preserving, and using this valuable resource effectively. Interestingly, our LinkedIn connection concludes, the fraud data challenges faced by a number of other industries are very similar to those faced by financial institutions such as his own. Banks are probably further along in fraud management and can provide a number of pointers to other industries, but fundamentally the problem is the same everywhere. Hence, even though most of his experience is bank based, the techniques he details here are applicable to a number of industries. As fraud examiners and forensic accountants, we will no doubt witness the impact of the application of analytically based fraud risk management across an ever-multiplying range of client industries.

Making Sure It Sticks

Download our Chapter’s Free App – RVACFES – on Google Play!

As a follow-on to our last blog post (see To Have and To Hold, immediately below), I thought I’d talk a little about the documents our investigating CFE was able to find.

These case documents proved critical to the examination and were found in both paper and digital form. Of the two types of evidence, the digital documents proved the more voluminous and the trickier from an investigative point of view. Suspected frauds, such as the one our CFE reader was investigating, leave behind data, all kinds of data, on computer systems. Despite the ubiquity of this digital evidence, though, it’s often overlooked, collected incorrectly, or analyzed ineffectively. The rub is that, if relevant evidence isn’t gathered at the very beginning of an investigation, it may be too late to do so later in the process. Therefore, ideally, a CFE’s client organization’s management should consider the importance of digital evidence from the outset of its operations and be prepared to gather it for a wide range of financial fraud related scenarios; indeed, most of the larger, more sophisticated companies, finding themselves routinely under cyber-attack, already do so.

It’s been my experience that many organizations underestimate just how often they may need to produce reliable evidence of what has happened in their information systems. And, importantly from the individual CFE’s point of view, they also may underestimate the demands that the legal system makes in terms of ensuring the admissibility and reliability of digital evidence. Unless an organization has developed a detailed incident response plan, much potential evidence will never be collected or will become worthless as a result of contamination. As a preliminary to any investigation involving digital data, CFEs should assess whether the client organization has applied a consistent and effective approach to managing information security incidents, including staff and organizational responsibilities and procedures; not having done so can prove a significant legal problem for the client in court. When a follow-up action against a person after an information security-related fraud involves legal action, evidence should be collected, retained, and presented so as to conform to the rules of evidence promulgated by the relevant jurisdiction(s). The examination should also review whether documented procedures are developed and followed when collecting and presenting routine evidence for internal disciplinary actions.

Digital forensic readiness (DFR) focuses on proactively collecting and preserving potential digital evidence. This can limit business risk by providing support for all kinds of legal defense, civil litigation, criminal prosecution, internal disciplinary actions, intellectual property claims, and due care documentation.  It also can document the impact of a crime or disputed action for an insurance or damage claim. In addition, digital forensics can support the recovery process indirectly after an incident (something that proved very important for the client of our CFE in the ‘To Have and to Hold’ case).

When preparing data for use as evidence, all CFEs know that it’s often necessary to provide further supporting information. Audit trail information must be able to demonstrate that the system used to preserve the evidence was functioning appropriately and to show how information progresses through it. Audit trails need to be comprehensive and overseen appropriately, because without them the integrity and authenticity, and thus the evidential weight, of the data stored in the system could be questioned in court. In addition to the system’s effectiveness, CFEs need to be concerned with whether access to audit trail information was controlled adequately. In some applications access may be needed only infrequently, so it’s important that the access procedures be documented.

In most jurisdictions, the legal admissibility of digital evidence (or any evidence) in a court of law is governed by three fundamental principles: relevance, reliability, and sufficiency. Digital evidence is relevant when it can prove or disprove an element of the specific case being investigated. Although the meaning of reliable (i.e., authentic and accurate) varies among jurisdictions, a general principle is to ensure the digital evidence is what it purports to be and has not been spoiled. It is not always necessary to collect all data or to make a complete copy of the original evidence. In many jurisdictions, the concept of sufficiency means that enough evidence has been collected to prove or disprove the elements of the matter.

Information security is key when discussing legal admissibility. Was the process for capturing electronic information secure? Was the correct information captured, and was it complete and accurate? During storage, was the information changed in any way? When responding to questions by opposing counsel about the authenticity of stored information, organizations must be able to show that the system was operated correctly at all times. To address this issue, CFEs should establish that all relevant procedures are well thought out, complete in scope, documented, and operated by competent individuals.

To reduce the risk of legal challenges, CFEs should consider offering evidence that the client organization has implemented security measures. Management should have reviewed information security systems at planned intervals to determine whether their control objectives, controls, processes, and procedures:

–Conform to the requirements of information security standards and relevant regulations;
–Conform to the identified IT security requirements;
–Are implemented and maintained effectively;
–Are performing as expected.

Determining which digital evidence the organization should be collecting and preserving is a two-step process. First, the crimes and disputes the organization is exposed to must be determined. Second, based on the identified exposure, the organization needs to identify potential evidence based on a risk analysis combined with a cost/benefit approach.
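
One way to think about the second step is a simple ranking of candidate evidence sources by likelihood and impact weighed against the cost of preserving them. The sketch below is purely illustrative; the sources, scores, and formula are hypothetical and not ACFE guidance:

```python
# Illustrative only: (evidence source, likelihood of needing it 1-5, impact if missing 1-5,
# annual cost to collect and preserve).
exposures = [
    ("e-mail archive",           5, 5, 20_000),
    ("instant message logs",     4, 4, 15_000),
    ("badge/access logs",        3, 4,  5_000),
    ("database audit trails",    4, 5, 30_000),
    ("endpoint browser history", 2, 2, 25_000),
]

def priority(likelihood, impact, cost):
    """Crude risk-based ranking: expected benefit of preserving the source per dollar spent."""
    return (likelihood * impact) / cost

for source, l, i, c in sorted(exposures, key=lambda e: priority(e[1], e[2], e[3]), reverse=True):
    print(f"{source:<26} priority={priority(l, i, c):.4f}")
```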

DFR is a natural progression for organizations with a mature information security posture, enabling them to pursue perpetrators in the legal domain when other security measures have failed. Among more security-aware CFE clients, it can enhance existing processes and leverage incident response, business continuity, and crime prevention activities. CFEs can provide assurance of their client organization’s forensic readiness based on the following criteria suggested by the ACFE:

–Whether the organization has identified the main likely threats it faces;
–Whether the organization has identified what sorts of evidence it is likely to need in a criminal proceeding and how it will secure that data;
–Whether the organization has identified the amount and quality of evidence it already has collected;
–Whether the organization is familiar with potential legal problems such as admissibility, data protection, human rights, limits to surveillance, obligations to staff members and others, and disclosure in legal proceedings;
–Whether the organization has identified the management, skill, and resource implications and developed an action plan.

CFEs, as part of the planning for a fraud or incident investigation, should ensure the completeness and integrity of digital evidence. Moreover, they should ensure that potentially useful evidence is never overlooked. A functioning and documented DFR supports such assurance and helps make sure that assurance sticks.

To Have and To Hold

One of our CFE readers practicing abroad reports that he is currently investigating the transactions of a key executive of a financial subsidiary of a large U.S.-based company and finding that many documents critical to his examination simply have not been retained anywhere on the firm’s server farm, a problem much more common in our present e-world than many of us would like to think! The documents weren’t on the servers simply because the firm’s document retention policy (DRP), as published to its employees, isn’t comprehensive enough to require them to be.

When our CFE’s client firm policy was written, the primary electronic document type was e-mail files stored on company servers. But today, electronic records also include text messages, instant messages, voice mail, internet search histories, images on digital cameras, cell phones, and tablets, and scores of differing file types stored on myriad personal devices and in the cloud. In this environment, the importance of the DRP, as a living document, is right up there with other critical documentation like that concerning access control and physical security. Each paper and electronic document type should be treated separately in the policy. Even in the case of e-mail, a technology that’s been ubiquitous for two decades, our Chapter members report finding retention practices that are often spotty and messages that are sometimes difficult to search and retrieve. Rather than backing up all e-mails, for example, the policy might distinguish between an e-mail with an attached signed contract and an e-mail inviting staff to the office holiday party. In addition, e-mails often end up residing in numerous locations. Because real-time monitoring of individuals’ personal computers would be impractical for any firm, a central electronic depository could be developed for contracts, tax returns, medical plans, pension statements, and other documents that have legal or regulatory holding limits. Also, all CFEs must be constantly alert to new communication means and be prepared to adopt investigative modifications to deal quickly with them.

We’re all familiar with the many problems involving legal discovery.  Such requests primarily deal with centrally located files, but certain types of lawsuits, such as hostile work environment or sexual harassment, can also require discovery of personal files. Because no client management staff is large enough to verify that all employees follow prescribed rules, companies must rely on regular training to inform employees and confirm their compliance with company retention policy. Companies can reinforce this training by taking appropriate disciplinary measures against anyone who violates the rules. This reinforcement, of course, is based on the assumption that the organization already has appropriate controls in place and an effective process to gather the necessary data to monitor employee compliance. In the present case, our CFE reports that none of these controls proved to be in place; their absence will likely result in any subsequent prosecution of the targeted fraudster being either extremely difficult or impracticable.

Also, instant messages, like those used by our CFE’s executive target, illustrate the hidden complexity of contemporary document retention. Dealing with e-mail is relatively straightforward compared with the issues surrounding instant messages. Instant messages provide a convenient way to transmit text, audio, and live streaming video, often outside the firewalls and other safeguards of a company’s main system, which creates greater technological and competitive risks. Of greater concern to CFEs should be the content of the messages. An instant message constitutes business correspondence; as such, the message is discoverable and must be included in any document retention plan. The organization should have an established plan for recovering the messages in their original form, and the optimal time to formulate that plan is before legal action, not in the midst of it. Many organizations (again, like our CFE’s client) have document retention plans covering only paper-based correspondence or e-mail; management of the content of instant messages is not addressed. In addition to instant messaging, individuals use text messaging, which takes place on personal devices like cell phones. If a company doesn’t have an instant messaging system (IMS), it should consider acquiring one. An IMS allows message backup and access in case of discovery. Storing the instant messages and allowing access to them after the fact can help mitigate the organization’s liability exposure and close fraud vulnerability and security holes in the system. At a minimum, this would demonstrate some due diligence to outside stakeholders. The issue boils down to having a clear policy, both in terms of digital media use and its retention. The retention policy would involve purging instant messages once they are older than a given retention period. Use policies might include random monitoring, an important deterrent for abuse and a valuable means to gather sample data about use.
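
As a sketch of what the purging side of such a policy might look like in practice (the 180-day period, the legal-hold flag, and the message fields are all hypothetical; the actual retention period is a legal and business decision):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 180  # hypothetical limit set by the document retention policy

def purge_expired(messages, now=None):
    """Drop instant messages older than the retention period, keeping anything under legal hold."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [m for m in messages if m["sent_at"] >= cutoff or m.get("legal_hold", False)]

messages = [
    {"id": 1, "sent_at": datetime(2017, 1, 3, tzinfo=timezone.utc)},                      # expired
    {"id": 2, "sent_at": datetime(2017, 6, 20, tzinfo=timezone.utc), "legal_hold": True}, # held
    {"id": 3, "sent_at": datetime.now(timezone.utc)},                                     # recent
]
print([m["id"] for m in purge_expired(messages)])
```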

So CFEs need to be aware that policy creation for present-day business communication technology is both more complex and more necessary than the document retention policies of the past. Past policies usually governed only workplace documents, whereas policies today must also govern documents that are generated and consumed on mobile devices away from the workplace. The document retention policy should include retention limits for each type of format. Employees should be trained and reminded of the policy and their responsibility to follow it. Targeted management reviews based on fraud risk assessments could be valuable and would reinforce the importance of following the policy. In addition to training employees to regularly cull the e-mail and instant messages they send and receive, Internet browser options should be set so that cookies and images are purged when the Internet session is over and histories are discarded daily.

Retention policies also should stress the appropriate and acceptable uses of company equipment. During company training, employees should learn that sharing inappropriate text, audio, or video files is unacceptable, and they should clearly understand the consequences of not following company policy. Unfortunately, the delineation between work time and personal time is often blurred. With more employees being on call beyond the standard 40-hour work week, employers need to be sensitive to employees’ needs to perform personal tasks while at work using corporate equipment, or to perform work-related activities with personal devices. Certain questions must be asked, however, such as: If an employee uses a personal device and maintains personal and business files separately, would the personal files be discoverable? Would discoverability depend on whether the device was personally or company owned? It could be assumed that if the employer owns the device, all records are discoverable. If the employee owns the device, privacy issues may come into play. Due diligence always demands that conservative guidelines be employed.

I recommended to our CFE reader that, in addition to consulting corporate attorneys and IT staff, he might consider providing management with recommendations about whether outside consultants are needed to help develop or modify a more up-to-date document retention policy. Also, because electronic data is often salvageable even after it’s been deleted, a computer forensic expert could provide valuable insights into both the development and implementation of a new policy. This expert would then have knowledge of the system and could provide assistance if the company is party to a lawsuit in the future. Contracting with a computer forensic expert on retainer allows the organization to receive regular feedback on changes in the state of the art in computing technology and best practices in the field. These experts are aware of the costs and burden of discovery under both poor and good retention policies, and they’re able to make recommendations that will save money should litigation arise.