Category Archives: Data Breaches

Client’s Card Security

Our Chapter recently got a question from a reader of this blog about data privacy; specifically, she asked whether a client’s compliance with the requirements of the Payment Card Industry Data Security Standard (PCI DSS) would provide reasonable assurance that the client organization’s customer data privacy controls and procedures are adequate. The question came up in the course of a credit card fraud examination in which our reader’s small CPA firm was involved. A very good question indeed! The short answer, in my opinion, is that although PCI DSS compliance audits cover some aspects of data privacy, because they’re limited to payment card data, such audits would not, in themselves, be sufficient to convince a jury that data privacy is adequately protected throughout a whole organization. The question is interesting because of its bearing on the fraud risk assessments CFEs routinely conduct. It’s important because CFEs should understand the scope (and limitations) of PCI DSS compliance activities within their client organizations and communicate those differences when reviewing corporate-wide data privacy for fraud prevention purposes. This understanding will also tend to prevent potential misunderstandings over duplication of review effort with business process owners and fraud examination clients.

Given all the IT breaches and intrusions happening daily, consumers are rightly cynical these days about businesses’ ability to protect their personal data. They report that they’re much more willing to do business with companies that have independently verified privacy policies and procedures. In-depth privacy fraud risk assessments can help organizations assess their preparedness for the outside review that inevitably follows a major customer data privacy breach. As I’m sure all the readers of this blog know, data privacy generally applies to information that can be associated with a specific individual or that has identifying characteristics that might be combined with other information to indicate a specific person. Such personally identifiable information (PII) is defined as any piece of data that can be used to uniquely identify, contact, or locate a single person. Information can, however, be considered private without being personally identifiable: sensitive personal data includes individual preferences, confidential financial or health information, and other personal information. An assessment of data privacy fraud risk encompasses the policies, controls, and procedures in place to protect PII.

In planning a fraud risk assessment of data privacy, CFEs should evaluate or consider, based on risk:

–The consumer and employee PII that the client organization collects, uses, retains, discloses, and discards.
–Privacy contract requirements and risk liabilities for all outsourcing partners, vendors, contractors, and other third parties involving sharing and processing of the organization’s consumer and employee data.
–Compliance with privacy laws and regulations impacting the organization’s specific business and industry.
–Previous privacy breaches within the organization and its third-party service providers, and reported breaches at similar organizations noted by clearinghouses like Dun & Bradstreet and in the client industry’s trade press.
–The CFE should also consult with the client’s corporate legal department before undertaking the review to determine whether all or part of the assessment procedure should be performed at legal direction and protected as “attorney-client privileged” work products.

The next step in a privacy fraud risk assessment is selecting a framework for the review.
Two frameworks to consider are the American Institute of Certified Public Accountants (AICPA) Privacy Framework and The IIA’s Global Audit Technology Guide: Managing and Auditing Privacy Risks. For ACFE training purposes, one CFE working for a well-known online retailer reported organizing her fraud assessment report based on the AICPA framework. The CFE chose that methodology because it would be easily understood and supported by management, external auditors, and the audit committee. The AICPA’s ten-component framework was useful both in developing standards for the organization and as an assessment framework:

–Management. The organization defines, documents, communicates, and assigns accountability for its privacy policies and procedures.
–Notice. The organization provides notice about its privacy policies and procedures and identifies the purposes for which PII is collected, used, retained, and disclosed.
–Choice and Consent. The organization describes the choices available to the individual customer and obtains implicit or explicit consent with respect to the collection, use, and disclosure of PII.
–Collection. The organization collects PII only for the purposes identified in the Notice.
–Use, Retention, and Disposal. The organization limits the use of PII to the purposes identified in the Notice and for which the individual customer has provided implicit or explicit consent. The organization retains these data for only as long as necessary to fulfill the stated purposes or as required by laws or regulations, and thereafter disposes of such information appropriately.
–Access. The organization provides individual customers with access to their PII for review and update.
–Disclosure to Third Parties. The organization discloses PII to third parties only for the purposes identified in the Notice and with the implicit or explicit consent of the individual.
–Security for Privacy. The organization protects PII against unauthorized physical and logical access.
–Quality. The organization maintains accurate, complete, and relevant PII for the purposes identified in the Notice.
–Monitoring and Enforcement. The organization monitors compliance with its privacy policies and procedures and has procedures to address privacy complaints and disputes.

Using the detailed assessment procedures in the framework, the CFE, working with internal client staff, developed specific testing procedures for each component, which were performed over a two-month period. Procedures included traditional walkthroughs of processes, interviews with individuals responsible for IT security, technical testing of IT security and infrastructure controls, and review of physical storage facilities for documents with PII. Technical scanning, performed independently by the retailer’s IT staff, identified PII on servers and on some individual personal computers erroneously excluded from compliance monitoring. Facilitated sessions with the CFE and individuals responsible for PII helped identify problem areas. The fraud risk assessment dramatically increased awareness of data privacy and identified several opportunities to strengthen ownership, accountability, controls, procedures, and training. As a result of the assessment, the retailer implemented a formal data classification scheme and increased IT security controls. Several of the vulnerabilities and required enhancements involved controls over hard-copy records containing PII. Management reacted positively to the overall report and requested that the CFE schedule recurring reviews of privacy breach fraud vulnerability.

Fraud risk assessments of client privacy programs can help make the business case within any organization for focusing on privacy now, and for promoting organizational awareness of privacy issues and threats. This is one of the most significant opportunities for fraud examiners to help assess risks and identify potential gaps that are daily proving so devastating if left unmanaged.

The Know-It-All

As fraud examiners intimately concerned with the ongoing state of health of fraud management and response systems, we find ourselves constantly looking at the integrity of the data that’s truly the lifeblood of today’s client organizations. We’re constantly evaluating the network of anti-fraud controls we hope will help keep those pesky, uncontrolled, random data vulnerabilities to a minimum. Every little bit of critical information that gets mishandled or falls through the cracks, every transaction that doesn’t get recorded, every anti-fraud policy or procedure that’s misapplied has some effect on the client’s overall fraud management picture.

When it comes to managing its client, financial, and payment data, almost every organization has a Pauline. Pauline’s the person everyone goes to for the answers about data, and about the state of the system(s) that process it, that no one else in her unit ever seems to have. That’s because Pauline is an exceptional employee with years of detailed, hands-on experience in daily financial system operations and maintenance. Pauline is also an example of the extraordinary level of dependence that many organizations today have on a small handful of key employees. The Great Recession, during which enterprises relied on retaining the experienced employees they already had rather than on traditional hiring and cross-training practices, only exacerbated this still-existing, ever-growing trend. The very real threat that the Paulines of the corporate data world pose to the fraud management system is not so much that they will commit fraud themselves (although that’s an ever-present possibility) but that they will retire or take another job out of state, carrying their vital knowledge of the company’s systems and data with them.

The day after Pauline’s retirement party, and to an increasing degree thereafter, it will dawn on Pauline’s unit management that it has lost a large amount of valuable information about the true state of its data and financial processing system(s), and that a mass of system-critical documentation has been carried around nowhere but in Pauline’s head. The point is that, for some organizations, reliance on a few key employees for day-to-day, operationally related information about their data goes well beyond what’s appropriate and constitutes an unacceptable level of risk to the fraud prevention system. Today’s newspapers and the internet are full of stories about data breaches, only reinforcing the importance of vulnerable data, and of its documentation, to the ongoing operational viability of our client organizations.

Anyone who’s investigated frauds involving large-scale financial systems (insurance claims, bank records, client payment information) is painfully aware that when the composition of data changes (field definitions or content), surprisingly little of that change-related information is ever formally documented. Most of it is stored in the heads of a few key employees, and those key employees aren’t necessarily the ones involved in everyday, routine data management projects. There’s always a significant level of detail that goes undocumented, left out or left to chance, and it falls to the analyst of the data (whether an auditor, a management scientist, a fraud examiner, or another assurance professional) to find the anomalies and question them. The anomalies might take the form of missing data, changes in data field definitions, or changes in the content of the fields; the possibilities are endless. Without proper, formal documentation, the immediate or future significance of these types of anomalies for the fraud management system, and for the overall fraud risk assessment process itself, becomes almost impossible to determine.

If our auditor or fraud examiner, operating under today’s typical budget or time constraints, is not very thorough and misses some of these anomalies, they can end up never being addressed. How many times as an analyst have you tried to explain something about the financial system that just doesn’t look right (like apparently duplicate transactions) only to be told, “Oh, yeah. Pauline made that change back in February before she retired; we don’t have too many details on it.” In other words, undocumented changes to transactions and data, the details of which now exist only in Pauline’s head. When a data-driven system is built on incomplete information, the system can be said to have failed in its role as a component of overall fraud management. The cycle of incomplete information gets propagated to future decisions, and the cost of the missing or inadequately explained data can be high. What can’t be seen can’t be managed, or even explained.
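The “apparently duplicate transactions” scenario above can be screened for mechanically before anyone has to rely on Pauline’s memory. Below is a minimal sketch in Python using pandas; the data frame and its column names are invented for illustration, not drawn from any real client system.

```python
import pandas as pd

# Hypothetical transaction extract; vendor, amount, and date column
# names are assumptions for illustration, not a real client schema.
transactions = pd.DataFrame({
    "vendor": ["Acme", "Acme", "Baker", "Acme", "Clark"],
    "amount": [1250.00, 1250.00, 310.75, 980.00, 310.75],
    "date":   ["2024-02-03", "2024-02-03", "2024-02-10",
               "2024-02-11", "2024-02-10"],
})

# Flag every row that shares vendor, amount, and date with another --
# the "apparently duplicate transactions" a reviewer would want
# explained or documented.
dupes = transactions[transactions.duplicated(
    subset=["vendor", "amount", "date"], keep=False)]
print(dupes)
```

Note that the two rows sharing only an amount and a date (different vendors) are not flagged; the test keys on the full combination, which keeps the false-positive rate down.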

It’s truly humbling for any practitioner to experience how much critical financial information resides in the fading (or absent) memories of past or present key employees. As fraud examiners, we should attempt to foster among our clients a culture supportive of the development of concurrent, transaction-related documentation and of the sharing of knowledge on a consistent basis for all systems, but especially in matters involving changes to critical financial systems. One nice benefit of this approach, which I brought to the attention of one of my clients not long ago, is that it frees up the time of these key employees to work on more productive fraud control projects rather than constantly serving as the encyclopedia for the rest of the operational staff.

Analytic Reinforcements

Rumbi’s post of last week on ransomware got me thinking, on a long drive back from Washington, about what an excellent tool the AICPA’s new Cybersecurity Risk Management Reporting Framework is, not only for CPAs but for CFEs and for all our client organizations as well. As the seemingly relentless wave of cyberattacks continues with no sign of letup, organizations are under intense pressure from key stakeholders and regulators to implement and enhance their cyber security and fraud prevention programs to protect customers, employees, and all the types of valuable information in their possession.

According to research from the ACFE, the average total cost per company, per event, of a data breach is $3.62 million. Initial damage estimates of a single breach, while often staggering, may not take into account less obvious and often undetectable threats such as the theft of intellectual property, espionage, destruction of data, attacks on core operations, or attempts to disable critical infrastructure. These effects can reverberate for years and have devastating financial, operational, and brand ramifications.

Given the present broad regulatory pressure to tighten cyber security controls and the visibility surrounding cyberrisk, a number of proposed regulations focused on improving cyber security risk management programs have been introduced in the United States over the past few years by our various governing bodies. One of the more prominent is a regulation by the New York Department of Financial Services (NYDFS) that prescribes certain minimum cyber security standards for the entities it regulates. Based on an entity’s risk assessment, the NYDFS law has specific requirements around data encryption, data protection and retention, third-party information security, application security, incident response and breach notification, board reporting, and required annual re-certifications.

However, organizations continue to report to the ACFE that they struggle to systematically report to stakeholders on the overall effectiveness of their cyber security risk management programs. In response, the AICPA in April of last year released a new cyber security risk management reporting framework intended to help organizations expand cyberrisk reporting to a broad range of internal and external users, including management and the board of directors. The AICPA’s new reporting framework is designed to address the need for greater stakeholder transparency by providing in-depth, easily consumable information about the state of an organization’s cyberrisk management program. The cyber security risk management examination uses an independent, objective reporting approach and employs broader and more flexible criteria. For example, it allows for the selection and use of any suitable, available control framework in establishing the entity’s basic cyber security objectives and in developing and maintaining controls within the entity’s cyber security risk management program, regardless of whether the standard is the US National Institute of Standards and Technology (NIST) Cybersecurity Framework, the International Organization for Standardization (ISO) ISO 27001/2 and related frameworks, or even an internally developed framework based on a combination of sources. The examination is voluntary and applies to all types of entities, but it should be considered by CFEs as a leading practice that provides management, boards, and other key stakeholders with clear insight into the current state of an organization’s cyber security program while identifying gaps or pitfalls that leave organizations vulnerable to cyber fraud and other intrusions.

What stakeholders might benefit from a client organization’s cyber security risk management examination report? Clearly, we CFEs as we go about our routine fraud risk assessments; but such a report, most importantly, can be vital in helping an organization’s board of directors establish appropriate oversight of a company’s cyber security risk program and credibly communicate its effectiveness to stakeholders, including investors, analysts, customers, business partners and regulators. By leveraging this information, boards can challenge management’s assertions around the effectiveness of their cyberrisk management and fraud prevention programs and drive more effective decision making. Active involvement and oversight from the board can help ensure that an organization is paying adequate attention to cyberrisk management and displaying due diligence. The board can help shape expectations for reporting on cyberthreats while also advocating for greater transparency and assurance around the effectiveness of the program.

The cyber security risk management report, in its initial and follow-up iterations, can be invaluable in providing overview guidance to CFEs and forensic accountants in targeting both fraud prevention and fraud detection/investigative analytics. We know from our ACFE training that data analytics need to be fully integrated into the investigative process. Ensuring that data analytics are embedded in the detection/investigative process requires support from all levels, starting with the managing CFE. It will be easier and more coherent for management to support such integration if it is already supporting cyber security risk management reporting. Management will also have an easier time reinforcing the use of analytics generally, although the data analytics function supporting fraud examination will still have to market its services, team leaders will still be challenged by management, and team members will still have to be trained to effectively employ the newer analytical tools.

The presence of a robust cyber security risk management reporting process should also assist the lead CFE in establishing goals for the implementation and use of data analytics in every investigation, and these goals should be communicated to the entire investigative team. It should be made clear at every level of the client organization that data analytics will support the investigative planning process for every detected fraud. The identification of business processes, IT systems, data sources, and potential analytic routines should be discussed and considered not only during planning, but throughout every stage of the engagement. Key to obtaining everyone’s buy-in is including investigative team members in identifying the areas or tests that the analytics group will target in support of the field work. Initially, it will be important to highlight success stories and educate managers and team leaders about what is possible. Improving on the traditional investigative approach of document review, interviewing, and transaction review, investigators can benefit from data analytics that allow more precise identification of the control deficiencies, instances of noncompliance with policies and procedures, and mis-assessments of high-risk areas that contributed to the development of the fraud in the first place. These same analytics can then be used to ensure that appropriate post-fraud management follow-up has occurred by elevating the identified deficiencies to the cyber security risk management reporting process and by implementing enhanced fraud prevention procedures in areas of higher fraud risk. This process would be especially useful in responding to and following up on data breaches.

Once patterns are gathered and centralized, analytics can be employed to measure the frequency of occurrence, file sizes, the quantity of files executed, and average time of use. The math involved allows an examiner to grasp the big picture. Individuals, including examiners, are normally overwhelmed by the sheer volume of information, but automation of pattern-recognition techniques makes big data a tractable investigative resource. The larger the sample size, the easier it is to determine patterns of normal and abnormal behavior. Network haystacks can be combed by algorithms that notify the CFE, as information archaeologist, about the probes of an insider threat, for example.
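As a rough illustration of baselining frequency of occurrence, the sketch below flags a user whose activity falls far outside the population norm. All user names, counts, and the two-standard-deviation threshold are assumptions for illustration, not a production detection rule.

```python
from statistics import mean, stdev

# Hypothetical count of file-access events per user over one week;
# names and numbers are invented for illustration.
access_counts = {
    "alice": 42, "bob": 39, "carol": 45, "dave": 41, "erin": 38,
    "frank": 44, "grace": 40, "heidi": 43, "ivan": 37,
    "eve": 180,   # an outlier worth an examiner's attention
}

counts = list(access_counts.values())
mu, sigma = mean(counts), stdev(counts)

# Flag users whose activity sits far outside the population baseline
# (more than two sample standard deviations above the mean).
flagged = [user for user, n in access_counts.items()
           if n > mu + 2 * sigma]
print(flagged)
```

On real network data the baseline would be computed per role or per time window, but the principle is the same: the larger the population, the sharper the distinction between normal and abnormal behavior.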

Without analytics, enterprise-level fraud examination and risk assessment is a diminished discipline, limited in scope and effectiveness. Without an educated investigative workforce, armed with a programming language for automation and an accompanying data-mining philosophy and skill set, the control needs of management leaders at the enterprise level will go unmet; leaders will not have the data needed for fraud prevention on a large scale, nor a workforce capable of getting them that data in the emergency following a breach or penetration.

The beauty of analytics, from a security and fraud prevention perspective, is that it allows the investigative efforts of the CFE to align with the critical functions of corporate business. It can be used to discover recurring risks, incidents and common trends that might otherwise have been missed. Establishing numerical baselines on quantified data can supplement a normal investigator’s tasks and enhance the auditor’s ability to see beneath the surface of what is presented in an examination. Good communication of analyzed data gives decision makers a better view of their systems through a holistic approach, which can aid in the creation of enterprise-level goals. Analytics and data mining always add dimension and depth to the CFE’s examination process at the enterprise level and dovetail with and are supported beautifully by the AICPA’s cyber security risk management reporting initiative.

CFEs should encourage the staffs of client analytics support functions to possess …

–understanding of the employing enterprise’s data concepts (data elements, record types, database types, and data file formats).
–understanding of logical and physical database structures.
–the ability to communicate effectively with IT and related functions to achieve efficient data acquisition and analysis.
–the ability to perform ad hoc data analysis as required to meet specific fraud examiner and fraud prevention objectives.
–the ability to design, build, and maintain well-documented, ongoing automated data analysis routines.
–the ability to provide consultative assistance to others who are involved in the application of analytics.
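The “ad hoc data analysis” ability above often starts with a simple profiling pass over a client extract: field types, null counts, distinct values. Here is a hedged sketch using pandas, with an invented data frame standing in for real client data.

```python
import pandas as pd

# Invented stand-in for a client extract; column names and values
# are assumptions for illustration only.
df = pd.DataFrame({
    "vendor_id": [101, 102, 103, 101, None],
    "amount":    [250.0, 99.5, 1200.0, 250.0, 75.0],
    "approved":  ["Y", "Y", None, "N", "Y"],
})

# A quick profile an analytics group might run before any fraud test:
# data type, missing-value count, and distinct-value count per field.
profile = pd.DataFrame({
    "dtype":    df.dtypes.astype(str),
    "nulls":    df.isna().sum(),
    "distinct": df.nunique(),
})
print(profile)
```

A profile like this immediately surfaces missing approvals or identifiers, the kind of gaps that merit a question to the data owner before deeper testing begins.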

Forensic Data Analysis

As a long-term advocate of big data based solutions to investigative challenges, I have been interested to see the recent application of such approaches to the ever-growing problem of data breaches. More data is stored electronically than ever before: financial data, marketing data, customer data, vendor listings, sales transactions, email correspondence, and more, and evidence of fraud can be located anywhere within those mountains of data. Unfortunately, fraudulent data often looks like legitimate data when viewed in the raw. Taking a sample and testing it might not uncover fraudulent activity. Fortunately, today’s fraud examiners have the ability to sort through piles of information by using special software and data analysis techniques. These methods can identify future trends within a certain industry, and they can be configured to identify breaks in audit control programs and anomalies in accounting records.

In general, fraud examiners perform two primary functions to explore and analyze large amounts of data: data mining and data analysis. Data mining is the science of searching large volumes of data for patterns. Data analysis refers to any statistical process used to analyze data and draw conclusions from the findings. These terms are often used interchangeably. If properly used, data analysis processes and techniques are powerful resources. They can systematically identify red flags and perform predictive modeling, detecting a fraudulent situation long before many traditional fraud investigation techniques would be able to do so.
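One classic red-flag screen of the kind just described is a first-digit (Benford’s Law) test: in many natural financial populations the leading digit d occurs with probability log10(1 + 1/d), and fabricated figures often deviate from that pattern. The sketch below is a toy illustration; a real engagement would use thousands of records and a formal goodness-of-fit statistic, and the sample amounts here are invented.

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    # Expected frequency of leading digit d under Benford's Law.
    return math.log10(1 + 1 / d)

def first_digit_freqs(amounts):
    # Take the first significant digit of each amount.
    digits = [int(str(abs(a)).lstrip("0.")[0]) for a in amounts]
    counts = Counter(digits)
    total = len(digits)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

# Illustrative sample only; far too small for a real Benford test.
sample = [1023.50, 1870.00, 2450.25, 312.40, 1199.99, 987.00,
          1543.10, 2210.75, 118.60, 1675.00]
observed = first_digit_freqs(sample)
deviation = {d: round(observed[d] - benford_expected(d), 3)
             for d in range(1, 10)}
print(deviation)
```

Large positive deviations for particular digits (amounts clustered just under an approval threshold, say) are exactly the sort of red flag that prompts a closer look at the underlying transactions.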

Big data are high-volume, high-velocity, and/or high-variety information assets that require new forms of processing to enable enhanced decision making, insight discovery, and process optimization. Simply put, big data is information of extreme size, diversity, and complexity. In addition to thinking of big data as a single set of data, fraud investigators and forensic accountants are thinking about the way data grow when data sets that might not normally be connected are joined together. Big data represents the continuous expansion of data sets whose size, variety, and speed of generation make them difficult for investigators and client managements to manage and analyze.

Big data can be instrumental to the evidence gathering phase of an investigation. Distilled down to its core, how do fraud examiners gather data in an investigation? They look at documents and financial or operational data, and they interview people. The challenge is that people often gravitate to the areas with which they are most comfortable. Attorneys will look at documents and email messages and then interview individuals. Forensic accounting professionals will look at the accounting and financial data (structured data). Some people are strong interviewers. The key is to consider all three data sources in unison.

Big data helps to make it all work together to bring the complete picture into focus. With the ever-increasing size of data sets, data analytics has never been more important or useful. Big data requires the use of creative and well-planned analytics due to its size and complexity. One of the main advantages of using data analytics in a big data environment is that it allows the investigator to analyze an entire population of data rather than having to choose a sample and risk drawing erroneous conclusions in the event of a sampling error.
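As a small illustration of the full-population point, the sketch below scans every payment in a (made-up) extract for suspiciously round amounts rather than drawing a sample; a sample could easily miss the cluster entirely. The figures and the round-thousand threshold are assumptions for illustration.

```python
# Full-population round-amount test: fabricated payments, or payments
# engineered to sit at an authorization limit, often cluster at round
# figures. Scanning every record avoids sampling risk. Invented data.
payments = [4999.00, 1250.00, 317.42, 5000.00, 5000.00, 864.19,
            5000.00, 2203.87, 5000.00, 1176.55]

round_thousands = [p for p in payments if p % 1000 == 0]
share = len(round_thousands) / len(payments)
print(f"{share:.0%} of payments fall on round thousands")
```

When a large share of a population lands on round figures, the examiner drills into those records first; on a genuine payment file the expected share would be tiny.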

To conduct an effective data analysis, a fraud examiner must take a comprehensive approach. Any direction can (and should) be taken when applying analytical tests to available data. The more creative fraudsters get in hiding their breach-related schemes, the more creative the fraud examiner must become in analyzing data to detect these schemes. For this reason, it is essential that fraud investigators consider both structured and unstructured data when planning their engagements.

Data are either structured or unstructured. Structured data is the type of data found in a database, consisting of recognizable and predictable structures. Examples of structured data include sales records, payment or expense details, and financial reports. Unstructured data, by contrast, is data not found in a traditional spreadsheet or database. Examples of unstructured data include vendor invoices, email and user documents, human resources files, social media activity, corporate document repositories, and news feeds. When using data analysis to conduct a fraud examination, the fraud examiner might use structured data, unstructured data, or a combination of the two. For example, conducting an analysis on email correspondence (unstructured data) among employees might turn up suspicious activity in the purchasing department. Upon closer inspection of the inventory records (structured data), the fraud examiner might uncover that an employee has been stealing inventory and covering her tracks in the record.
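A minimal sketch of the unstructured-data side of that example: a keyword screen over email text that surfaces messages worth checking against the structured inventory records. The messages and the keyword list are invented for illustration; real text-mining would go well beyond fixed keywords.

```python
import re

# Hypothetical email snippets; all text is invented for illustration.
emails = [
    "Let's keep the warehouse counts between us for now.",
    "Quarterly budget review is Tuesday at 10.",
    "Adjust the inventory record before the audit next week.",
]

# A simple keyword screen over unstructured text; matches flag
# messages for comparison against the structured inventory records.
red_flags = re.compile(
    r"\b(inventory|write[- ]off|between us|audit)\b",
    re.IGNORECASE,
)

suspicious = [msg for msg in emails if red_flags.search(msg)]
print(len(suspicious))
```

The flagged messages would then be read in context and cross-referenced with the inventory ledger, the structured half of the example in the text.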

Recent reports of breach responses detailed in social media and the trade press indicate that investigators deploying advanced forensic data analysis tools across larger data sets gained better insights into the penetration, which led to more focused investigations, better root cause analysis, and more effective fraud risk management. Advanced technologies that incorporate data visualization, statistical analysis, and text-mining concepts, as compared to spreadsheets or relational database tools, can now be applied to massive data sets from disparate sources, enhancing breach response at all organizational levels.

These technologies enable our client companies to ask new compliance questions of their data that they might not have been able to ask previously. Fraud examiners can establish important trends in business conduct or identify suspect transactions among millions of records rather than being forced to rely on smaller samplings that could miss important transactions.

Data breaches bring enhanced regulatory attention. It’s clear that data breaches have raised the bar on regulators’ expectations of the components of an effective compliance and anti-fraud program. Adopting big data/forensic data analysis procedures into the monitoring and testing of compliance can create a cycle of improved adherence to company policies and improved fraud prevention and detection, while providing additional comfort to key stakeholders.

CFEs and forensic accountants are increasingly being called upon to be members of teams implementing or expanding big data/forensic data analysis programs so as to more effectively manage data breaches and a host of other instances of internal and external fraud, waste and abuse. To build a successful big data/forensic data analysis program, your client companies would be well advised to:

— begin by focusing on the low-hanging fruit: the priority of the initial project(s) matters. The first and immediately subsequent projects, the low-hanging investigative fruit, normally incur the largest cost associated with setting up the analytics infrastructure, so it’s important that the first few investigative projects yield tangible results/recoveries.

— go beyond the usual rule-based, descriptive analytics. One of the key goals of forensic data analysis is to increase the detection rate of internal control noncompliance while reducing the risk of false positives. From a technology perspective, clients’ internal audit and other investigative groups need to move beyond rule-based spreadsheets and database applications and embrace both structured and unstructured data sources, including the use of data visualization, text-mining, and statistical analysis tools.

— see that successes are communicated. Share information on early successes across divisional and departmental lines to gain broad business process support. Once validated, success stories will generate internal demand for the outputs of the forensic data analysis program. Try to construct a multi-disciplinary team, including information technology, business users (i.e., end-users of the analytics) and functional specialists (i.e., those involved in the design of the analytics and day-to-day operations of the forensic data analysis program). Communicate across multiple departments to keep key stakeholders assigned to the fraud prevention program updated on forensic data analysis progress under a defined governance program. Don’t just seek to report instances of noncompliance; seek to use the data to improve fraud prevention and response. Obtain investment incrementally based on success, and not by attempting to involve the entire client enterprise all at once.

— secure leadership support. Leadership support will get the big data/forensic data analysis program funded, but regular interpretation of the results by experienced or trained professionals is what will make the program successful. Keep the analytics simple and intuitive; don’t try to cram too much information into any one report. Invest in new, updated versions of tools to make analytics sustainable. Develop and acquire staff professionals with the required skill sets to sustain and leverage the forensic data analysis effort over the long term.
Finally, enterprise-wide deployment of forensic data analysis takes time; clients shouldn’t be led to expect overnight adoption; an analytics integration is a journey, not a destination. Quick-hit projects might take four to six weeks, but the program and integration can take one to two years or more.

Our client companies need to look at a broader set of risks, incorporate more data sources, move away from lightweight, end-user desktop tools, and head toward real-time or near-real-time analysis of increased data volumes. Organizations that embrace these potential areas for improvement can deliver more effective and efficient compliance programs that are highly focused on identifying and containing the damage associated with exploitation, by hackers and others, of key high-fraud-risk business processes.

Regulating the Financial Data Breach

During several years of my early career, I was employed as a Manager of Operations Research by a mid-sized bank holding company. My small staff and I would endlessly discuss issues related to fraud prevention and develop techniques to keep our customers’ checking and savings accounts safe, secure, and private. A never-ending battle!

It was a simpler time back then, technically. But since a large proportion of fraud committed against banks and financial institutions today still involves the illegal use of stolen customer or bank data, some of the newest and most important laws and regulations that management assurance professionals like CFEs must be aware of in our practice, and with which our client banks must comply, relate to the safeguarding of confidential data both from internal theft and from breaches of the bank’s information security defenses by outside criminals.

As the ACFE tells us, there is no silver bullet for fully protecting any organization from the ever-growing threat of information theft. Yet full implementation of the measures required by the federal banking regulators’ now-in-place provisions can at least lower the risk of a costly breach occurring. This is particularly true since the size of recent data breaches across all industries has forced Federal enforcement agencies to become increasingly active in monitoring compliance with the critical rules governing the safeguarding of customer credit card data, bank account information, Social Security numbers, and other personal identifying information. Among these key rules are the Federal Reserve Board’s Interagency Guidelines Establishing Information Security Standards, which define customer information as any record containing nonpublic personal information about an individual who has obtained a financial product or service from an institution that is to be used primarily for personal, family, or household purposes and who has an ongoing relationship with the institution.

It’s important to realize that, under the Interagency Guidelines, customer information refers not only to information pertaining to people who do business with the bank (i.e., consumers); it also encompasses, for example, information about (1) an individual who applies for but does not obtain a loan; (2) an individual who guarantees a loan; (3) an employee; or (4) a prospective employee. A financial institution must also require, by contract, its own service providers who have access to consumer information to develop appropriate measures for the proper disposal of the information.

The FRB’s Guidelines are to a large extent drawn from the information protection provisions of the Gramm Leach Bliley Act (GLBA) of 1999, which repealed the Depression-era Glass-Steagall Act that substantially restricted banking activities. However, GLBA is best known for its formalization of legal standards for the protection of private customer information and for rules and requirements for organizations to safeguard such information. Since its enactment, numerous additional rules and standards have been put into place to fine-tune the measures that banks and other organizations must take to protect consumers from the identity-related crimes to which information theft inevitably leads.

Among GLBA’s most important information security provisions affecting financial institutions is the so-called Financial Privacy Rule. It requires banks to provide consumers with a privacy notice at the time the consumer relationship is established and every year thereafter.

The notice must provide details collected about the consumer, where that information is shared, how that information is used, and how it is protected. Each time the privacy notice is renewed, the consumer must be given the choice to opt out of the organization’s right to share the information with third-party entities. That means that if bank customers do not want their information sold to another company, which will in all likelihood use it for marketing purposes, they must indicate that preference to the financial institution.

CFEs should note that most pro-privacy advocacy groups strongly object to this and other privacy-related elements of GLBA because, in their view, these provisions do not provide substantive protection of consumer privacy. One major advocacy group has stated that GLBA does not protect consumers because it unfairly places the burden on the individual to protect privacy under an opt-out standard. By placing the burden on customers to protect their own data, GLBA weakens their power to control their financial information. The Act’s opt-out provisions do not require institutions to provide a standard of protection for customers regardless of whether they opt out. This approach is based on the assumption that financial companies will share information unless expressly told not to do so by their customers and, if customers neglect to respond, it gives institutions the freedom to disclose customer nonpublic personal information.

CFEs need to be aware, however, that for bank clients, regardless of how effective, or not, GLBA may be in protecting customer information, noncompliance with the Act itself is not an option. Because of the current explosion in breaches of bank information security systems, the privacy issue has to some degree been overshadowed by the urgency to physically protect customer data; for that reason, compliance with the Interagency Guidelines concerning information security is more critical than ever. The basic elements partially overlap with the preventive measures against internal bank employee abuse of the bank’s computer systems. However, they go quite a bit further by requiring banks to:

—Design an information security program to control the risks identified through a security risk assessment, commensurate with the sensitivity of the information and the complexity and scope of its activities.
—Evaluate a variety of policies, procedures, and technical controls and adopt those measures that are found to most effectively minimize the identified risks.
—Apply and enforce access controls on customer information systems, including controls to authenticate and permit access only to authorized individuals and to prevent employees from providing customer information to unauthorized individuals who may seek to obtain it through fraudulent means.
—Restrict access at physical locations containing customer information, such as buildings, computer facilities, and records storage facilities, to permit access only to authorized individuals.
—Encrypt electronic customer information, including while in transit or in storage on networks or systems to which unauthorized individuals may gain access.
—Establish procedures designed to ensure that customer information system modifications are consistent with the institution’s information security program.
—Implement dual control procedures, segregation of duties, and employee background checks for employees with responsibilities for or access to customer information.
—Monitor systems and maintain procedures to detect actual and attempted attacks on or intrusions into customer information systems.
—Develop response programs that specify actions to be taken when the institution suspects or detects that unauthorized individuals have gained access to customer information systems, including appropriate reports to regulatory and law enforcement agencies.
—Implement measures to protect against destruction, loss, or damage of customer information due to potential environmental hazards, such as fire and water damage, or technological failures.

The Interagency Guidelines require a financial institution to determine whether to adopt controls to authenticate and permit only authorized individuals access to certain forms of customer information. Under this control, a financial institution also should consider the need for a firewall to safeguard confidential electronic records. If the institution maintains Internet or other external connectivity, its systems may require multiple firewalls with adequate capacity, proper placement, and appropriate configurations.

Similarly, the institution must consider whether its risk assessment warrants encryption of electronic customer information. If it does, the institution must adopt encryption measures that protect information in transit, in storage, or both. The Interagency Guidelines do not impose specific authentication or encryption standards, so it is advisable for CFEs to consult outside experts on the technical details applicable to the client institution’s security requirements, especially when conducting after-the-fact fraud examinations.
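
Where encryption at rest is warranted, a minimal sketch using the widely used third-party cryptography package’s Fernet recipe might look like the following (the package choice and function names are assumptions for illustration; the Guidelines mandate no particular library or algorithm, and production key management is deliberately omitted):

```python
# Illustrative only: requires the third-party "cryptography" package
# (pip install cryptography). Key rotation, key vault storage, and
# access auditing are out of scope for this sketch.
from cryptography.fernet import Fernet

def protect_record(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt a customer record before it is written to storage."""
    return Fernet(key).encrypt(plaintext)

def recover_record(token: bytes, key: bytes) -> bytes:
    """Decrypt a stored record; raises InvalidToken if it was tampered with."""
    return Fernet(key).decrypt(token)

key = Fernet.generate_key()          # in practice, keep this in a key vault
token = protect_record(b"SSN:123-45-6789", key)
assert recover_record(token, key) == b"SSN:123-45-6789"
```

The point of the sketch is the shape of the control, not the specific tool: the sensitive field never touches storage in the clear, and decryption requires a separately managed key.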

The financial institution also must consider the use of an intrusion detection system to alert it to attacks on computer systems that store customer information. In assessing the need for such a system, the institution should evaluate the ability, or lack thereof, of its staff to rapidly and accurately identify an intrusion. It also should assess the damage that could occur between the time an intrusion occurs and the time the intrusion is recognized and action is taken.

The regulatory agencies have also provided our clients with requirements for responding to information breaches. These are contained in a related document entitled Interagency Guidance on Response Programs for Unauthorized Access to Customer Information and Customer Notice (Incident Response Guidance). According to the Incident Response Guidance, a financial institution should develop and implement a response program as part of its information security program. The response program should address unauthorized access to or use of customer information that could result in substantial harm or inconvenience to a customer.

Finally, the Interagency Guidelines require financial institutions to train staff to prepare and implement their information security programs. The institution should consider providing specialized training to ensure that personnel sufficiently protect customer information in accordance with its information security program.

For example, an institution should:

—Train staff to recognize and respond to schemes to commit fraud or identity theft, such as guarding against pretext calling.
—Provide staff members responsible for building or maintaining computer systems and local and wide area networks with adequate training, including instruction about computer security.
—Train staff to properly dispose of customer information.

Authority Figures

As fraud examiners and forensic accountants intimately concerned with the on-going health of our clients’ fraud management programs, we find ourselves constantly looking at the integrity of the critical data that’s truly (as much as financial capital) the lifeblood of today’s organizations. We’re constantly evaluating the network of anti-fraud controls we hope will help keep those pesky, uncontrolled, data-driven vulnerabilities to fraud to a minimum. Every bit of critical financial information that gets mishandled or falls through the cracks, every transaction that doesn’t get recorded, every anti-fraud policy or procedure that’s misapplied has some effect on the client’s overall fraud management picture and on our challenge.

When it comes to managing its client, financial, and payment data, almost every small to medium-sized organization has a Sandy. Sandy’s the person to whom everyone goes to get the answers about data and the state of the system(s) that process it; quick answers that no one else ever seems to have. That’s because Sandy is an exceptional employee with years of detailed hands-on experience in daily financial system operations and maintenance. Sandy is also an example of the extraordinary level of dependence that many organizations have today on a small handful of their key employees. The now-unlamented Great Recession, during which enterprises relied on retaining the experienced employees they had rather than on traditional hiring and cross-training practices, only exacerbated an existing, ever-growing trend. The very real threat that the Sandys of the corporate data world pose to the Enterprise Fraud Management system is not so much that they will commit fraud themselves (although that’s an ever-present possibility) but that they will retire or take another job across town or out of state, taking their vital knowledge of company systems and data with them.

The day after Sandy’s retirement party, and to an increasing degree thereafter, it will dawn on Sandy’s management that it has lost a large amount of information about the true state of its data and financial processing system(s). Management will also become aware, if it isn’t already, of its lack of a large amount of system-critical documentation that’s been carried around nowhere else but in Sandy’s head. The point is that, for some smaller organizations, their reliance on a few key employees for day-to-day, operationally related information goes well beyond what’s appropriate and constitutes an unacceptable level of risk to their entire fraud prevention programs. Today’s newspapers and the internet are full of stories about hacking and large-scale data breaches that only reinforce the importance of vulnerable data, and of the completeness of its documentation, to the on-going operational viability of our client organizations.

Anyone who’s investigated frauds involving large-scale financial systems (insurance claims, bank records, client payment information) is painfully aware that when the composition of data changes (field definitions or content), surprisingly little of the change-related information is formally documented. Most of the information is stored in the heads of a few key employees, and those key employees aren’t necessarily involved in everyday, routine data management projects. There’s always a significant level of detail that’s gone undocumented or been left to chance, and it falls to the analyst of the data (be s/he an auditor, a management scientist, a fraud examiner, or another assurance professional) to find the anomalies and question them. The anomalies might be in the form of missing data, changes in data field definitions, or changes in the content of the fields; the possibilities are endless. Without proper, formal documentation, the immediate or future significance of these types of anomalies for the fraud management system, and for the overall fraud risk assessment process itself, becomes almost impossible to determine.
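
A hedged sketch of the kind of screening an analyst might run against extracted records, flagging exact duplicates and missing key fields, is shown below (the record layout, field names, and function name are hypothetical):

```python
from collections import Counter

def screen_records(records, key_fields):
    """Flag two common anomaly types in extracted financial data:
    exact duplicates on the chosen key fields, and records with
    missing (None/empty) values in those fields."""
    # Build a comparable key for each record from the chosen fields.
    keys = [tuple(r.get(f) for f in key_fields) for r in records]
    counts = Counter(keys)
    duplicates = [r for r, k in zip(records, keys) if counts[k] > 1]
    missing = [r for r in records
               if any(r.get(f) in (None, "") for f in key_fields)]
    return duplicates, missing
```

Either list is only a starting point for questions; as the text notes, without documentation of field-level changes, the analyst still has to chase down whether a flagged record is an error, a fraud indicator, or an undocumented system change.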

If our auditor or fraud examiner, operating under today’s typical budget or time constraints, is not very thorough and misses some of these anomalies, they can end up never being addressed. How many times as analysts have we all tried to explain something (like apparently duplicate transactions) about a financial system that just doesn’t look right, only to be told, “Oh, yeah. Sandy made that change back in February before she retired; we don’t have too many details on it.” In other words, undocumented changes to transactions and data, the details of which now exist only in Sandy’s no-longer-available head. When a data-driven system is built on incomplete information, the system can be said to have failed in its role as a component of the organization’s fraud prevention program. The cycle of incomplete information gets propagated to future decisions, and the cost of the missing or inadequately explained data can be high. What can’t be seen can’t ever be managed or even explained.

In summary, it’s a truly humbling experience to be confronted with how much critical financial information resides in the fading (or absent) memories of past or present key employees; what the ACFE calls authority figures. As fraud examiners we should attempt to foster a culture among our clients supportive of the development of concurrent systems of transaction-related documentation and the consistent sharing of knowledge about all systems, but especially regarding the recording of changes to critical financial systems. One nice benefit of this approach, which I brought to the attention of one of my audit clients not too long ago, would be to free up the time of one of these key employees to work on more productive fraud control projects rather than serving as the encyclopedia for the rest of the operational staff.

Every Seat Taken!

Our Chapter’s thanks to all our attendees and to our partners, the Virginia State Police and national ACFE for the unqualified success of our May training event, Cyberfraud and Data Breaches! Our speaker, Cary Moore, CFE, CISSP, conducted a fully interactive, two-day session on one of the most challenging and relevant topics confronting practicing fraud examiners and forensic accountants today.

The event examined the potential avenues of data loss and guided attendees through the crucial strategies needed to mitigate the threat of malicious data theft and the risk of inadvertent data loss, recognizing that information is a valuable asset, and that management must take proactive steps to protect the organization’s intellectual property. As Cary forcefully pointed out, the worth of businesses is no longer based solely on tangible assets and revenue-making potential; the information the organization develops, stores, and collects accounts for a large share of its value.

A data breach occurs when there is a loss or theft of, or unauthorized access to, proprietary information that could result in compromising the data. It is essential that management understand the crisis its organization might face if its information is lost or stolen. Data breaches incur not only high financial costs but can also have a lasting negative effect on an organization’s brand and reputation.

Protecting information assets is especially important because the threats to such assets are on the rise, and the cost of a data breach increases with the number of compromised records. According to a 2017 study by the Ponemon Institute, data breaches involving fewer than 10,000 records caused an average loss of $1.9 million, while breaches with more than 50,000 compromised records caused an average loss of $6.3 million. However, before determining how to protect information assets, it is important to understand the nature of these assets and the many methods by which they can be breached.

Intellectual property is a catchall phrase for knowledge-based assets and capital, but it’s helpful to think of it as intangible proprietary information. Intellectual property (IP) is protected by law. IP law grants certain exclusive rights to owners of a variety of intangible assets. These rights incentivize individuals, company leaders, and investors to allocate the requisite resources to research, develop, and market original technology and creative works.

A trade secret is any idea or information that gives its owner an advantage over its competitors. Trade secrets are particularly susceptible to theft because they provide a competitive advantage. What constitutes a trade secret, however, depends on the organization, industry, and jurisdiction, but generally, to be classified as a trade secret, information must:

• Be secret: The information is not generally known to the relevant portion of the public.
• Confer some sort of economic benefit on its holder: The idea or information must give its owner an advantage over its competitors. The benefit conferred from the information, however, must stem from not being generally known, not just from the value of the information itself. The best test for determining what is confidential information is to determine whether the information would provide an advantage to the competition.
• Be the subject of reasonable efforts to maintain its secrecy: The owner must take reasonable steps to protect its trade secrets from disclosure. That is, a piece of information will not receive protection as a trade secret if the owner does not take adequate steps to protect it from disclosure.

Cary presented in-depth information on the various types of threats to data security including:

–Insiders
–Hackers
–Competitors
–Organized criminal groups
–Government-sponsored groups

Protecting proprietary information is a timely issue, but it is difficult. The event presented a list of common challenges faced when protecting information assets:

–Proprietary information is among the most valuable commodities, and attackers are doing everything in their power to steal as much of this information as possible.
–The risk of data breaches for organizations is high.
–New and emerging technologies create new risks and vulnerabilities.
— IT environments are becoming increasingly complex, making the management of them more expensive, difficult, and time consuming.
–There is a wider range of devices and access points, so businesses must proactively seek ways to combat the effects of this complexity.
–The rise in portable devices is creating more opportunities for data to “leak” from the business.
–The rise in Bring Your Own Device (BYOD) initiatives is generating new operational challenges and security problems.
–The rapidly expanding Internet of Things (IoT) has significantly increased the number of network connected things (e.g., HVAC systems, MRI machines, coffeemakers) that pose data security threats, many of which were inconceivable only a short time ago.
–The number of threats to corporate IT systems is on the rise.
–Malware is becoming more sophisticated.
–There is an increasing number of laws in this area, making information security an urgent priority.

Cary covered the entire gamut of challenges related to cyber fraud and data breaches ranging from legal issues, corporate espionage, social engineering, the use of social media, the bring-your-own-devices phenomenon, and the impact of cloud computing. The remaining portion of the event was devoted to addressing how enterprises can effectively respond when confronted by the challenges posed by these issues including breach response team building and breach prevention techniques like conducting security risk assessments, staff awareness training and the incident response plan.

When an organization experiences a data breach, management must respond in an appropriate and timely manner. During the initial response, time is critical. To help ensure that an organization responds to data breaches in a timely and efficient manner, management should have an incident response plan in place that outlines how to respond to such issues. Timely responses can help prevent further data loss, fines, and customer backlash. An incident response plan outlines the actions an organization will take when data breaches occur. More specifically, a response plan should guide the necessary action when a data breach is reported or identified. Because every breach is different, a response plan should not prescribe how the organization must respond in every instance. Instead, it should help the organization manage its response and create an environment that minimizes risk and maximizes the potential for success. In short, a response plan should describe the fundamentals the organization can deploy on short notice.

Again, our sincere thanks go out to all involved in the success of this most worthwhile training event!

The Threat Within

Our Chapter’s upcoming May 16th and 17th training seminar on CYBER FRAUD AND DATA BREACHES emphasizes that corporate insiders represent one of the largest threats to an organization’s vital information resources. Insiders are individuals with access to, or inside knowledge about, an organization, and such access or knowledge gives them the ability to exploit that organization’s vulnerabilities. Insiders enjoy two critical openings in the security structure that put them in a position to exploit organizations’ information security vulnerabilities:

• the trust of their employers
• their access to facilities

Information theft by insiders is of special concern when employees leave an organization. Often, employees leave one organization for another, taking with them the knowledge of how their former organization operates, as well as its pricing policies, manufacturing methods, customers, and so on.

The ACFE tells us that insiders can be classified into three categories:

• Employees:  employee insiders are employees with rights and access associated with being employed by the organization.
• Associates: insider associates are people with physical access to an organization’s facilities, but they are not employees of the organization (e.g., contractors, cleaning crews).
• Affiliates: insider affiliates are individuals connected to pure insiders or insider associates (e.g., spouse, friend, client), and they can use the credentials of those insiders with whom they are connected to gain access to an organization’s systems or facilities.

There are many types of potential insider threats, and they can be organized into the following categories:

• Traitors
• Zealots
• Spies
• Browsers
• Well-intentioned insiders

A traitor is a legitimate insider who misuses his or her insider credentials to facilitate malicious acts.  When a trusted insider misuses his or her privileges to violate a security policy, s/he becomes a traitor. Below are some signs that an insider may be a traitor:

• Unusual change in work habits;
• Seeking out sensitive projects;
• Unusual work hours;
• Inconsistent security habits;
• Mocking security policies and procedures;
• Rationalizing inappropriate actions;
• Changes in lifestyle;
• Living beyond his or her means.

Zealots are trusted insiders with strong and uncompromising beliefs that clash with their organization’s perspectives on certain issues and subjects. Zealots pose a threat because they might exploit their access or inside knowledge to “reform” their organizations.
Zealots might attempt reform by:

• Exposing perceived shortcomings of the organization by making unauthorized disclosures of information to the public or by granting access to outsiders;
• Destroying information;
• Halting services or the production of products.

Zealots believe that their actions are just, no matter how much damage they cause.

A spy is an individual who is intentionally placed in a situation or organization to gather intelligence. A well-placed corporate spy can provide intelligence on a target organization’s product development, product launches, and organizational developments or changes.

Spies are common in foreign, business, and competitive intelligence efforts.

Browsers are insiders who are overly curious about information they do not need to access, know, or possess to carry out their work duties. Their curiosity drives them to review data not intended for them. Browsers might “browse” through information that they have no specific need to know until they find something interesting or something they can use. Browsers might use such information for personal gain, or they might use it for:

• Obtaining awards;
• Supporting decisions about promotions;
• Understanding contract negotiations;
• Gaining a personal advantage over their peers.

Browsers can be the hardest insider threat to identify, and they can be even harder to defeat.

The well-intentioned insider is an insider who, through ignorance or laziness, unintentionally fosters security breaches. Well-intentioned insiders might foster security breaches by:

• Disabling anti-virus software;
• Installing unapproved software;
• Leaving their workstations or facilities unlocked;
• Using easy-to-crack passwords;
• Failing to shred or destroy sensitive information.
While well-intentioned individuals might be stellar employees when it comes to work production, their ignorance or laziness regarding information security practices can be disastrous.

CFE’s need to understand that there are numerous motivations for insider attacks including:

• Work-related grievances;
• Financial gain;
• Challenge;
• Curiosity;
• Spying for competitors;
• Revenge;
• Ego;
• Opportunity;
• Ideology (e.g., “I don’t like the way my organization conducts business.”).

There are many ways our client organizations can combat insider threats. The most effective mitigation strategies recommended by the ACFE are:

• Create an insider threat program. To combat insider threats, management should form an insider threat team, create related policies, develop processes and implement controls, and regularly communicate those policies and controls across the organization.
• Work together across the organization. To be successful, efforts to combat insider threats should be communicated across the silos of management, IT, data owners, software engineers, general counsel, and human resources.
• Address employee privacy issues with general counsel. Because employees have certain privacy rights that can affect numerous aspects of the employer-employee relationship, and because such rights may stem from, and be protected by, various elements of the law, management should consult legal counsel whenever addressing actions impacting employee privacy.
• Pay close attention at times of resignation/termination. Because leaving an organization is a key time of concern for insider threats, management should be cautious of underperforming employees, employees at risk of being terminated, and employees who will likely resign.
• Educate managers regarding potential recruitment. Management should train subordinates to exercise due diligence in hiring prospective employees.
• Recognize concerning behaviors as a potential indicator. Management must train managers and all employees to recognize certain behaviors or characteristics that might indicate employees are committing or are at risk of committing a breach. Common behavioral red flags are living beyond one’s financial means, experiencing financial difficulties, having an uncommonly close relationship with vendors or customers, and demonstrating excessive control over their job responsibilities.
• Mitigate threats from trusted business partners. Management should subject their organization’s contractors and outsourced organizations to the same security controls, policies, and procedures to which they subject their own employees.
• Use current technologies differently. Most organizations have implemented technologies to detect network intrusions and other threats originating outside the network perimeter, and organizations with such technologies should use them to the extent possible to detect potential indicators of malicious insider behavior within the network.
• Focus on protecting the most valuable assets. Management should dedicate the most effort to securing its most valuable organizational assets and intellectual property against insider threats.
• Learn from past incidents. Past incidents of insider threats and abuse will suggest areas of vulnerability that insiders will likely exploit again.
Additionally:
• Focus on deterrence, not detection. In other words, create a culture that deters any aberrant behavior so that those who continue to practice that behavior stand out from the “noise” of normal business; focus limited investigative resources on those individuals.
• Know your people—know who your weak links are and who would be most likely to be a threat. Use human resources data to narrow down threats rather than looking for a single needle in a pile of needles.
• Identify information that is most likely to be valuable to someone else and protect it to a greater degree than the rest of your information.
• Monitor ingress and egress points for information (e.g., USB ports, printers, network boundaries).
• Baseline normal activity and look for anomalies.
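The last bullet, baselining normal activity and looking for anomalies, can be sketched with a simple statistical rule of thumb. This is an illustrative toy only, not a substitute for the richer behavioral models used by real insider-threat and SIEM tooling; the function names and the three-standard-deviation threshold are assumptions chosen for the example:

```python
import statistics

def build_baseline(daily_counts):
    """Summarize a user's normal activity as the mean and standard
    deviation of historical daily event counts (e.g., files accessed
    or records printed per day)."""
    return statistics.mean(daily_counts), statistics.stdev(daily_counts)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag a day's count that sits more than `threshold` standard
    deviations above the user's historical mean."""
    mean, stdev = baseline
    if stdev == 0:
        # No variation in history: any increase over the mean stands out.
        return count > mean
    return (count - mean) / stdev > threshold

# Example: a user who normally touches roughly 45-55 files a day
# suddenly touches 400 -- well outside the baseline.
history = [45, 52, 48, 55, 50, 47, 53, 49, 51, 46]
baseline = build_baseline(history)
print(is_anomalous(400, baseline))  # flagged as anomalous
print(is_anomalous(54, baseline))   # within the normal range
```

In practice the baseline would be computed per user (or per peer group) across many signals, and flagged anomalies would feed a human review queue rather than trigger automatic action, which keeps the approach consistent with the deterrence-over-detection philosophy above.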
Other measures organizations might consider taking to combat insider threats include:
• Educate employees as to what information is proprietary and confidential.
• Require that all employees and third-party vendors and contractors sign nondisclosure agreements, both at the commencement and at the termination of employment or contracts; these written agreements should provide that all proprietary and confidential information learned during the relationship must be kept confidential and must not be disclosed to anyone.
• Ensure that the organization’s third-party vendors and contractors perform background checks on all of their employees who will have access to the organization’s information systems.
• Prohibit employees, contractors, and trusted business partners from printing sensitive documents that are not required for business purposes.
• If possible, avoid connecting information systems to those of business partners.

Also, when possible, management should conduct exit interviews with departing employees. During an exit interview, the departing employee should be reminded of the organization’s trade secrets and confidential information, as well as any obligation not to disclose or use such information for his or her own benefit or for the benefit of others without express written consent. The employee should also be given a form to sign acknowledging that he or she was so informed and agrees not to disclose any such information without consent.

Finally, when management terminates its relationship with an insider, it should immediately deactivate the insider’s access to company tools and resources.

Please consider joining us at our May 16th and 17th Spring training event, Cyber Fraud and Data Breaches, for 16 CPE credits! You may register and pay online here.