Category Archives: Computer Security

Regulating the Financial Data Breach

During several years of my early career, I was employed as a Manager of Operations Research by a mid-sized bank holding company. My small staff and I would endlessly discuss issues related to fraud prevention and develop techniques to keep our customers’ checking and savings accounts safe, secure, and private. A never-ending battle!

It was a technically simpler time back then, but a large proportion of the fraud committed against banks and financial institutions today still involves the illegal use of stolen customer or bank data. Accordingly, some of the newest and most important laws and regulations that management assurance professionals like CFEs must be aware of in our practice, and with which our client banks must comply, relate to safeguarding confidential data, both from internal theft and from breaches of the bank’s information security defenses by outside criminals.

As the ACFE tells us, there is no silver bullet for fully protecting any organization from the ever-growing threat of information theft. Yet full implementation of the measures specified by the required provisions of federal banking regulators now in place can at least lower the risk of a costly breach. This is particularly true since the size of recent data breaches across all industries has forced federal enforcement agencies to become increasingly active in monitoring compliance with the critical rules governing the safeguarding of customer credit card data, bank account information, Social Security numbers, and other personal identifying information. Among these key rules are the Federal Reserve Board’s Interagency Guidelines Establishing Information Security Standards, which define customer information as any record containing nonpublic personal information about an individual who has obtained a financial product or service from an institution, to be used primarily for personal, family, or household purposes, and who has an ongoing relationship with the institution.

It’s important to realize that, under the Interagency Guidelines, customer information refers not only to information pertaining to people who do business with the bank (i.e., consumers); it also encompasses, for example, information about (1) an individual who applies for but does not obtain a loan; (2) an individual who guarantees a loan; (3) an employee; or (4) a prospective employee. A financial institution must also require, by contract, that its own service providers with access to consumer information develop appropriate measures for the proper disposal of that information.

The FRB’s Guidelines are drawn in large part from the information protection provisions of the Gramm-Leach-Bliley Act (GLBA) of 1999, which repealed the Depression-era Glass-Steagall Act that had substantially restricted banking activities. GLBA, however, is best known for formalizing legal standards for the protection of private customer information and for its rules requiring organizations to safeguard such information. Since its enactment, numerous additional rules and standards have been put into place to fine-tune the measures that banks and other organizations must take to protect consumers from the identity-related crimes to which information theft inevitably leads.

Among GLBA’s most important information security provisions affecting financial institutions is the so-called Financial Privacy Rule. It requires banks to provide consumers with a privacy notice at the time the consumer relationship is established and every year thereafter.

The notice must detail the information collected about the consumer, where that information is shared, how it is used, and how it is protected. Each time the privacy notice is renewed, the consumer must be given the choice to opt out of the organization’s right to share the information with third-party entities. That means that if bank customers do not want their information sold to another company, which will in all likelihood use it for marketing purposes, they must indicate that preference to the financial institution.

CFEs should note that most pro-privacy advocacy groups strongly object to this and other privacy-related elements of GLBA because, in their view, these provisions do not provide substantive protection of consumer privacy. One major advocacy group has stated that GLBA does not protect consumers because it unfairly places the burden on the individual to protect privacy through an opt-out standard. By placing the burden on the customer to protect his or her own data, GLBA weakens customers’ power to control their financial information. The opt-out provisions do not require institutions to provide a baseline standard of protection for their customers regardless of whether the customers opt out. This provision is based on the assumption that financial companies will share information unless expressly told not to do so by their customers and, if customers neglect to respond, it gives institutions the freedom to disclose customers’ nonpublic personal information.

CFEs need to be aware, however, that for bank clients, regardless of how effective, or not, GLBA may be in protecting customer information, noncompliance with the Act itself is not an option. Because of the current explosion in breaches of bank information security systems, the privacy issue has to some degree been overshadowed by the urgency to physically protect customer data; for that reason, compliance with the Interagency Guidelines concerning information security is more critical than ever. The basic elements partially overlap with the preventive measures against internal bank employee abuse of the bank’s computer systems. However, they go quite a bit further by requiring banks to:

—Design an information security program to control the risks identified through a security risk assessment, commensurate with the sensitivity of the information and the complexity and scope of the institution’s activities.
—Evaluate a variety of policies, procedures, and technical controls, and adopt the measures found to most effectively minimize the identified risks.
—Apply and enforce access controls on customer information systems, including controls to authenticate and permit access only to authorized individuals and to prevent employees from providing customer information to unauthorized individuals who may seek it through fraudulent means (a minimal illustration of such a check follows this list).
—Restrict access at physical locations containing customer information, such as buildings, computer facilities, and records storage facilities, to authorized individuals only.
—Encrypt electronic customer information, including while in transit or in storage on networks or systems to which unauthorized individuals may gain access.
—Implement procedures designed to ensure that customer information system modifications are consistent with the institution’s information security program.
—Establish dual control procedures, segregation of duties, and employee background checks for employees with responsibilities for or access to customer information.
—Monitor systems and maintain procedures to detect actual and attempted attacks on or intrusions into customer information systems.
—Maintain response programs that specify actions to be taken when the institution suspects or detects that unauthorized individuals have gained access to customer information systems, including appropriate reports to regulatory and law enforcement agencies.
—Take measures to protect against destruction, loss, or damage of customer information due to potential environmental hazards, such as fire and water damage, or technological failures.
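As a minimal sketch of the access-control item above, consider the following Python fragment, which authenticates first and then authorizes by role, denying by default. All names and roles here are hypothetical placeholders of my own, not anything prescribed by the Guidelines:

    from dataclasses import dataclass

    AUTHORIZED_ROLES = {"teller", "branch_manager"}  # roles permitted to view records

    @dataclass
    class Employee:
        user_id: str
        role: str
        authenticated: bool  # True only after a successful login

    def can_view_customer_record(emp: Employee) -> bool:
        # Permit access only to authenticated staff holding an authorized role.
        return emp.authenticated and emp.role in AUTHORIZED_ROLES

    print(can_view_customer_record(Employee("jdoe", "teller", True)))        # True
    print(can_view_customer_record(Employee("asmith", "contractor", True)))  # False

The deny-by-default design choice matters: anyone not explicitly authorized, however well authenticated, is refused.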

The Interagency Guidelines require a financial institution to determine whether to adopt controls to authenticate and permit only authorized individuals access to certain forms of customer information. Under this control, a financial institution also should consider the need for a firewall to safeguard confidential electronic records. If the institution maintains Internet or other external connectivity, its systems may require multiple firewalls with adequate capacity, proper placement, and appropriate configurations.

Similarly, the institution must consider whether its risk assessment warrants encryption of electronic customer information. If it does, the institution must adopt encryption measures that protect information in transit, in storage, or both. The Interagency Guidelines do not impose specific authentication or encryption standards, so it is advisable for CFEs to consult outside experts on the technical details applicable to a client institution’s security requirements, especially when conducting after-the-fact fraud examinations.
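Since the Guidelines name no particular cipher or product, the following is purely an illustration of encryption at rest, sketched with the open-source Python cryptography package; the record content is invented, and any real deployment would hinge on key management, which this toy omits:

    # Illustrative only (pip install cryptography); key management, rotation,
    # and escrow -- the genuinely hard parts -- are deliberately out of scope.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # in practice, fetched from a key vault
    cipher = Fernet(key)

    record = b"SSN=123-45-6789;ACCT=000111222"   # invented sample data
    token = cipher.encrypt(record)  # ciphertext safe to write to disk
    assert cipher.decrypt(token) == record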

The financial institution also must consider the use of an intrusion detection system to alert it to attacks on computer systems that store customer information. In assessing the need for such a system, the institution should evaluate the ability, or lack thereof, of its staff to rapidly and accurately identify an intrusion. It also should assess the damage that could occur between the time an intrusion occurs and the time the intrusion is recognized and action is taken.
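The monitoring idea is easier to grasp with a toy example. Commercial intrusion detection systems are far richer, but this hedged Python sketch shows the kind of simple signal, repeated failed logins from one source, that such systems escalate; the log sample and thresholds are invented:

    from collections import Counter

    # (timestamp_seconds, source_ip, success) tuples; hypothetical log sample
    events = [(0, "10.0.0.5", False), (5, "10.0.0.5", False),
              (9, "10.0.0.5", False), (12, "10.0.0.9", True)]

    WINDOW, THRESHOLD = 60, 3
    failures = Counter(ip for t, ip, ok in events if not ok and t < WINDOW)
    suspects = [ip for ip, n in failures.items() if n >= THRESHOLD]
    print(suspects)  # ['10.0.0.5'] -> escalate per the response program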

The regulatory agencies have also provided our clients with requirements for responding to information breaches. These are contained in a related document entitled Interagency Guidance on Response Programs for Unauthorized Access to Customer Information and Customer Notice (Incident Response Guidance). According to the Incident Response Guidance, a financial institution should develop and implement a response program as part of its information security program. The response program should address unauthorized access to or use of customer information that could result in substantial harm or inconvenience to a customer.

Finally, the Interagency Guidelines require financial institutions to train staff to prepare and implement their information security programs. The institution should consider providing specialized training to ensure that personnel sufficiently protect customer information in accordance with its information security program.

For example, an institution should:

—Train staff to recognize and respond to schemes to commit fraud or identity theft, such as guarding against pretext calling.
—Provide staff members responsible for building or maintaining computer systems and local and wide area networks with adequate training, including instruction about computer security.
—Train staff to properly dispose of customer information.

Navigating the Cloud

I’ve read several articles in the trade press recently indicating that CFEs are finding some aspects of fraud investigations involving cloud-based data especially challenging. This follows naturally from the uncontested fact that, for many organizations, cloud-based computing does improve performance and dramatically reduces a wide range of IT and administrative costs.

Commissioning a cloud service provider can enable an organization to off-load much of the difficulty that comes with implementing, maintaining, and physically protecting the systems required for company operations. The organization no longer needs to employ such a large team of network engineers, database administrators, developers, and other technical staff. Instead, it can use smaller, in-house teams to maintain the cloud solution and keep everything running as anticipated. Moving to the cloud also can introduce new capabilities, such as the ability to add and remove servers based on seasonal demand, an option that would be impractical for a traditional data center.

Now that cloud computing has become a mainstream service, CFEs and forensic accountants are increasingly called upon to assess the cloud environment with an eye toward devising innovative approaches to the unique investigative features and risks these services pose, while at the same time grappling with the effects on their examinations of the security, reliability, and availability of critical data housed by their clients’ outside IT providers. Based on this assessment, CFEs can advise their client organizations on how best to meet the new investigative challenges when the inevitable cloud-involved fraud strikes.

The cloud encompasses application service providers, cloud infrastructure, and the virtual placement of a server, set of servers, or other pool of computing power in an environment shared among many entities and organizations. Cloud platforms and servers extend and supplement an organization’s own servers, resulting in multiple options for computing and application hosting. It is not sufficient to think of cloud platform and infrastructure oversight as mere vendor management. Fraud examinations involving these environments are more complex because of several factors about which the investigative team must make decisions when determining the structure of the examination.

The ACFE tells us that a cloud deployment can be just as variable in structure and architecture as a traditional IT implementation. Among the numerous cloud platforms confronting the CFE, the most common are infrastructure as a service, software as a service, and platform as a service. The employment of these three options alone makes a wide variety of models and other options available. Each of these options additionally poses a distinct set of fraud risks and preventative controls, depending on a client organization’s specific deployment of a particular cloud platform and infrastructure.

Many challenges and barriers to an unfettered examination can appear when the CFE’s client organization has contracted with a cloud provider who is, in effect, a third-party vendor. In some cases, reviewing the cloud service provider’s processes and infrastructure might not be allowed by contract. Instead, the vendor may offer attestation reports, such as those under the American Institute of Certified Public Accountants’ (AICPA’s) Statement on Standards for Attestation Engagements No. 16 (SSAE 16), as evidence of organizational controls. In other cases, the provider might restrict the examination to a select portion of the service, which can be problematic when the CFE is working to obtain an overview of a complex fraud. Further, providers often require the client to obtain specific approvals before any fraud examination activities can even begin. Ideally, client organizations would take these considerations into account before contracting with a cloud vendor, but that is, for the most part, not realistic unless a client organization has historically experienced a large number of frauds. Fraud is usually not the first thing on many clients’ minds when initially contracting with a cloud service provider.

One of the most difficult aspects of the fraud examination of a cloud infrastructure deployment is determining which fraud prevention controls are managed by the client organization and which by the cloud provider. With many cloud deployments, few controls are the actual responsibility of the provider. For example, the CFE’s client may be responsible for configuration management, patch management, and access management, while the provider is responsible only for physical and environmental security.

A client organization’s physical assets are tangible. The organization buys a physical piece of equipment and keeps a record of the asset; a CFE can see all the organization’s technology assets just by walking through the data center. Cloud infrastructure deployments, however, are virtual, and it’s easy to add and remove these systems. Many organizations base their models on servers and systems that are there one day and gone the next. IT departments themselves struggle with managing cloud assets, and the tools available to help cloud providers and clients are continually evolving. As a result, from the CFE’s perspective, the examination scope can be hard to manage and execute (a sketch of one way to snapshot a cloud inventory follows below).

The CFE is also confronted with the fact that, because cloud computing is a relatively recent and fast-growing technology service, a client organization’s own employees may not possess much cloud expertise. This scarcity creates risks to the CFE’s examination because IT administrators often aren’t positioned to fully explain the details of the cloud deployment and structure, so critical details bearing on the fraud under investigation may not be adequately documented. Also, migrating from internally operated facilities to cloud-based services can dramatically alter the fraud risk profile of any organization. For example, when an organization moves to a cloud-based service, in most cases all its data is stored on the same physical equipment that houses other organizations’ data. If configured inappropriately, data leaks can result.
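One practical response to that volatility is to snapshot the inventory on a given date so the examination scope can be pinned down. The following is a hedged sketch only, assuming an AWS deployment with the boto3 library and configured credentials; other providers expose similar listings through different APIs:

    import boto3

    ec2 = boto3.client("ec2")
    inventory = [
        {"id": i["InstanceId"], "state": i["State"]["Name"],
         "launched": str(i["LaunchTime"])}
        for r in ec2.describe_instances()["Reservations"]
        for i in r["Instances"]
    ]
    for server in inventory:
        print(server)   # archive the listing with the examination work papers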

Interacting with the client organization’s IT staff and management is the CFE’s first step toward understanding how the organization’s cloud strategy is or is not related to the circumstances of the fraud under investigation. How did the organization originally expect to use the cloud, and how is it using it in actual practice? What are the benefits and drawbacks of using it the way it does? What is the scope, from a fraud prevention and security perspective, of the organization’s cloud deployment? The lack of a cohesive, formal, and well-aligned cloud infrastructure strategy should be a red flag for the CFE as a possible contributing factor in any fraud involving cloud computing services.

The second step is the CFE’s review of the client’s security program (or lack thereof). IT departments and business units should ideally have a cloud security strategy available for CFE review. Such a strategy includes determining the types of data permissible to store in the cloud and how their security will be enforced (a small illustration follows). It also includes the integration of the information security program into the cloud. All the usual IT risks of traditional data centers apply to cloud deployments as well, among them malware propagation, denial-of-service attacks, data breaches, and identity theft, all of which, depending on the implementation, can fall on either party to the contract. Professionals who have received training in cloud computing may or may not be able to adapt traditional IT programs for fraud examination of physical servers to a cloud environment.
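That determination of permissible data types can be as simple as a classification allowlist. A small illustration, with invented class names and an invented policy:

    # Hypothetical policy: confidential data stays on-premises.
    CLOUD_ALLOWED = {"public", "internal"}

    def may_store_in_cloud(data_class: str) -> bool:
        return data_class in CLOUD_ALLOWED

    print(may_store_in_cloud("confidential"))  # False -> a flag for the examiner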

There is good news for the examining CFE, however. Cloud infrastructure brings with it myriad security technologies, useful to the CFE in conducting his or her examination, that are not affordable in most traditional deployments: from real-time, chronological reports on suspect activities in identity and access management systems, to network segmentation, to multifactor authentication.

In summary, CFEs and forensic accountants should not approach a cloud-involved engagement the same way they approach other fraud examinations involving third-party vendors. Cloud engagements present their own complexities, which CFEs should attempt to understand and assess adequately. SSAE 16 and other attestation reports based on audit and attestation standards can be valuable as informational background to the examination of a fraud involving cloud services. As a profession, CFEs can help by reinforcing the client community’s understanding that a correctly implemented cloud infrastructure can reduce a client organization’s residual risk of fraud by offloading a portion of the responsibility for managing IT risks to a cloud service provider. CFEs have a valuable opportunity to see that their client organizations benefit from the cloud while adequately addressing the new fraud risks introduced when their clients contract with a service provider and move IT operations to the cloud. Applying the same level of rigor to examinations involving cloud technology that they apply to technology managed in-house creates an environment in which the CFE and forensic accounting professions can be primary advocates for a strong cloud strategy implemented within the structure of the client organization’s fraud prevention program.

Cyberfraud & Business Continuity

We received an e-mail inquiry from a follower of our Chapter’s LinkedIn page last week asking specifically about recovery following a cyberfraud penetration and, in general, about disaster planning for smaller financial institutions. It’s a truism that, with virtually every type of business process and customer moving away from brick-and-mortar places of business to cloud-supported business transactions and communication, every such organization faces an exponential increase in the threat of viruses, bots, phishing attacks, identity theft, and a whole host of other cyberfraud intrusion risks. All these threats illustrate why a post-intrusion continuity plan should be at or near the top of any organization’s risk assessment, yet many of our smaller clients especially remain stymied by what they feel are the cost and implementation complexity of developing such a plan. Although management understands that it should have a plan, many say, “We’ll have to get to that next year,” yet it never seems to happen.

Downtime due to unexpected penetrations, breaches, and disasters of all kinds not only affects our client businesses individually, but can also affect the local, regional, or worldwide economy if the business is sufficiently large or critical. Organizations like Equifax do not operate in a vacuum; they are held accountable by customers, vendors, and owners to operate as expected. Moreover, the extent of the impact on a business depends on the products or services it offers. Having an updated, comprehensive, and tested general continuity plan can help organizations mitigate operational losses in the event of any disaster or major disruption. Whether it’s advising the organization about cyberfraud in general or reviewing the different elements of a continuity plan for fraud impact, the CFE can proactively assist the client organization on the front end in getting a cyberfraud-recovery continuity plan in place and then in ensuring its efficient operation on the back end.

Specifically, regarding the impact of cyberfraud, the ACFE tells us that, until relatively recently, many organizations reported not having directly addressed it in their formal business continuity plans. Some may have had limited plans that addressed only a few financial fraud-related scenarios, such as employee embezzlement or supplier billing fraud, but hadn’t equipped general employees to deal with even the most elemental impacts of cyberfraud.   However, as these threats increasingly loomed, and as their on-line business expanded, more organizations have committed themselves to the process of formally addressing them.

An overall business continuity plan, including targeted elements to address cyberfraud, isn’t a short-term project but rather an ongoing set of procedures and control definitions that must evolve along with the organization and its environment. It’s an action plan, complete with the tools and resources needed to continue those critical business processes necessary to keep the entity operating after a cyber disruption. Before advising our clients to embark on such a business continuity planning project, we need to make them aware that there is a wealth of documentation available to help in their planning and execution effort. An example is one written for the industry of our Chapter’s inquirer, banking: the U.S. Federal Financial Institutions Examination Council’s (FFIEC’s) Business Continuity Planning Handbook. Similar guides are available on-line to orient the continuity process for entities in virtually every other major business sector. While banks are held to a high standard of preparedness and are subject to regular bank examination, all types of organizations can profit from using the detailed outline the FFIEC handbook provides as input to develop their own plans. The publication encourages organizations of all sizes to adopt a process-oriented approach to continuity planning that involves business impact analysis as well as fraud risk assessment, management, and monitoring.

An effective plan begins with client commitment from the top. Senior management and the board of directors are responsible for managing and controlling risk; plan effectiveness depends on management’s willingness to commit to the process from start to finish. Working as part of the implementation team, CFEs can make sure both the audit committee and senior management understand this commitment and realize that business disruption from cyberattack represents an elevated risk to the organization that merits senior-level attention. The goal of the ensuing business impact analysis is to identify the impact of cyber threats and related events on all the client organization’s business processes. Critical needs are assessed for all functions, processes, and personnel, including specialized equipment requirements, outsourced relationships and dependencies, alternate site needs, staff cross-training, and staff support such as specialized training and guidance from human resources regarding related personnel issues. As participants in this process, CFEs acting proactively are uniquely qualified to assist management in identifying different cyberfraud threats and their potential impacts on the organization.

Risk assessment helps gauge whether planned cyberfraud-related continuity efforts will be successful. Business processes and impact assumptions should be stress-tested during this phase. Risks related to protecting customer and financial information, complying with regulatory guidelines, selecting new systems to support the business, managing vendors, and maintaining secure IT should all be considered. By focusing on a single type of potential cyber threat’s impact on the business, our client organizations can develop realistic scenarios of related threats that may disrupt the targeted processes. At the risk assessment stage, organizations should perform a gap analysis to compare the actions needed to recover normal operations against those required for a major business interruption. This analysis highlights cyber exposures that the organization will need to address in developing its recovery plan. Clients should also consider conducting another gap analysis to compare what is present in their proposed or existing continuity plan with what is outlined (in the case of a bank) in the recommendations of the FFIEC handbook; a minimal sketch of such a comparison follows. This is an excellent way to assess needs and compliance with these and/or the guidelines available for other industries. Here too, CFEs can provide value by employing their skills in fraud risk assessment to assist the organization in identifying the most relevant cyber risks.
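Mechanically, that second gap analysis is just a set comparison. A minimal sketch, with control names invented for illustration rather than drawn from the FFIEC handbook itself:

    required = {"offsite_backup", "calling_tree", "incident_response",
                "vendor_contacts", "tabletop_testing"}   # the framework's list
    documented = {"offsite_backup", "incident_response"} # the client's plan

    gaps = sorted(required - documented)
    print("Controls missing from the continuity plan:", gaps)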

After analyzing the business impact analysis and risk assessment, the organization should devise a strategy to mitigate the risks of business interruption from cyberfraud. This becomes the plan itself: a catalog of steps and checklists, including team members and their roles for recovery, used to initiate action following a cyber penetration event. The plan should go beyond technical issues to include processes such as identifying a lead team, creating lists of emergency contacts, developing calling trees, listing manual procedures, considering alternate locations, and outlining procedures for dealing with public relations (a skeleton of such a catalog appears below). As members of the team, CFEs can work with management throughout response plan creation and installation, consulting on plan creation while advising management on areas to consider and ensuring that fraud-related risks are transparently defined and addressed.
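To make the “catalog of steps” concrete, here is a skeleton of such a plan expressed as machine-readable data; every name, number, and step below is invented for illustration:

    plan = {
        "lead_team": ["CISO", "Fraud Examiner", "Legal Counsel"],
        "emergency_contacts": {"CISO": "555-0100", "FBI field office": "555-0199"},
        "calling_tree": {"CISO": ["IT Ops Manager", "PR Lead"],
                         "IT Ops Manager": ["DBA on call"]},
        "first_steps": ["isolate affected systems",
                        "preserve logs for the examination",
                        "notify regulators per the Incident Response Guidance"],
    }
    for step in plan["first_steps"]:
        print("TODO:", step)

Keeping the plan in a structured form like this also makes the periodic review and update discussed below easier to verify.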

Testing is critical to confirm cyberfraud contingency plans. Testing objectives should start small, with methods such as walkthroughs, and increase to eventually encompass tabletop exercises and full enterprise-wide testing. The plan should be reviewed and updated for any changes in personnel, policies, operations, and technology. CFEs can provide management with a fraud-aware review of the plan and how it operates, but their involvement should not replace management’s participation in testing the actual plan. If the staff who may have to execute the plan have never touched it, they are setting themselves up for failure.

Once the plan is created and tested, maintaining it becomes the most challenging activity and is vital to success in today’s ever-evolving universe of cyber threats. Therefore, concurrent updating of the plan in the face of new and emerging threats is critical.

In summary, cyberfraud-threat continuity planning is an ongoing process for all types of internet dependent organizations that must remain flexible as daily threats change and migrate. The plan is a “living” document. The IT departments of organizations are challenged with identifying and including the necessary elements unique to their processes and environment on a continuous basis. Equally important, client management must oversee update of the plan on a concurrent basis as the business grows and introduces new on-line dependent products and services. CFEs can assist by ensuring that their client organizations keep cyberfraud related continuity planning at the top of mind by conducting periodic reviews of the basic plan and by reporting on the effectiveness of its testing.

From Inside the Building

By Rumbi Petrozzello, CFE, CPA/CFF
2017 Vice-President – Central Virginia Chapter ACFE

Several months ago, I attended an ACFE session where one of the speakers had worked on the investigation of Edward Snowden. He shared that one of the ways Snowden gained access to some of the National Security Agency (NSA) data he downloaded was through the inadvertent assistance of his supervisor. According to this investigator, Snowden’s supervisor shared his password with Snowden, giving Snowden access to information beyond Snowden’s own level of authorization. In addition, when the security personnel reviewing employee downloads noticed that Snowden was downloading copious amounts of data, they approached Snowden’s supervisor to question why this might be the case. The supervisor, while acknowledging it to be true, stated that Snowden wasn’t really doing anything untoward.

At another ACFE session, a speaker shared information with us about how Chelsea Manning was able to download and remove data from a secure government facility. Manning would come to work wearing headphones, listening to music on a Discman. Security would hear the music blasting and scan the CDs. Day after day, it was the same scenario: Manning showed up to work, music blaring. Security staff grew so accustomed to Manning, the Discman, and her CDs that when she came to work through security with a blank CD boldly labelled “LADY GAGA,” security didn’t blink. They should have, because it was that CD, and ones like it that she later carried home from work, that contained the data she eventually shared with WikiLeaks.

Both of these high-profile disasters are notable examples of the bad outcomes arising from a realized internal threat. Both Snowden and Manning worked for organizations that had, and have, more rigorous security procedures and policies in place than most entities. Yet neither Snowden nor Manning needed to perform any magic tricks to sneak data out of the secure sites where the target data was held; it seems that all it took was audacity on the one side and trust and complacency on the other.

When organizations deal with outside parties, such as vendors and customers, they tend to spend a lot of time setting up the structures and systems that will guide how the organization interacts with those vendors and customers. Generally, companies take these systems of control seriously, if only because of the problems they will have to deal with during annual external audits if they don’t. The typical new employee will spend a lot of time learning the steps from the point a customer places an order through to the point the customer’s payment is received. There will be countless training manuals to refer to and many a reminder from co-workers who may be negatively impacted if the rookie screws up.

However, this scenario tends not to hold up when it comes to how employees typically share information and interact with each other. This is true despite the elevated risk that a rogue insider represents. Often, when we think about an insider causing harm to a company through fraudulent acts, we tend to imagine a villain, someone we could identify easily because s/he is obviously a terrible person. After all, only a terrible person could defraud their employer. In fact, as the ACFE tells us, the most successful fraudsters are the ones who gain our trust and who, therefore, don’t really have to do too much for us to hand over the keys to the kingdom. As CFEs and Forensic Accountants, we need to help those we work with understand the risks that an insider threat can represent and how to mitigate that risk. It’s important, in advising our clients, to guide them toward the creation of preventative systems of policy and procedure that they sometimes tend to view as too onerous for their employees. Excuses I often hear run along the lines of:

• “Our employees are like family here, we don’t need to have all these rules and regulations”

• “I keep a close eye on things, so I don’t have to worry about all that”

• “My staff knows what they are supposed to do; don’t worry about it.”

Now, if people can easily walk sensitive information out of locations that have documented systems and are known to be high-security operations, can you imagine what they can do at your client organizations? Especially if the employer is assuming that employees magically know what they are supposed to do? This is the point we should be driving home with our clients. We should address the fact that both trust and complacency in organizations can be problems as well as assets. It’s great to be able to trust employees, but we should also talk to our clients about the fraud triangle and how one aspect of it, pressure, can come to bear on any staff member, even the most trusted. With that in mind, it’s important to institute controls so that, should pressure arise with an employee, there will be little opportunity open to that employee to act. Both Manning and Snowden have publicly spoken about the pressures they felt that led them to act in the way they did. The reason we even know about them today is that they had the opportunity to act on those pressures.

I’ve spent time consulting with large organizations, often for months at a time. During those times, I got to chat with many members of staff, including security. On a couple of occasions, I forgot and left my building pass at home. Even though I was on a first-name basis with the security staff and had spent time chatting with them about our personal lives, they still asked me for identification and looked me up in the system. I’m sure they thought I was a nice and trustworthy enough person, but they knew to follow procedures and always checked whether I was still authorized to access the building. The important point is that, despite knowing me, they knew to check and followed through.

Examples of controls employees should be reminded to follow are:

• Don’t share your password with a fellow employee. If that employee cannot access certain information with their own password, either they are not authorized to access that information or they should speak with an administrator to gain the desired access. Sharing a password seems like a quick and easy solution when under time pressures at work, but remind employees that when they share their login information, anything that goes awry will be attributed to them.

• Always follow procedures. Someone looking for an opportunity only needs one.

• When something looks amiss, thoroughly investigate it. Even if someone tells you that all is well, verify that this is indeed the case.

• Explain to staff and management why a specific control is in place and why it’s important. If they understand why they are doing something, they are more likely to see the control as useful and to apply it.

• Schedule training on a regular basis to remind staff of the controls in place and the systems they are to follow. You may believe that staff know what they are supposed to do, but reminding them reduces the risk of their relying on hearsay and secondhand information. Management is often surprised by the gap between what they think staff knows and what the staff really knows.

It should be clear to your clients that they have control over who has access to sensitive information and when and how it leaves their control. It doesn’t take much for an insider to gain access to this information. A face you see smiling at you daily is the face of a person you can grow comfortable with and with whom you can drop your guard. However, if you already have an adequate system and effective controls in place, you take the personal out of the equation and everyone understands that we are all just doing our job.

Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the beginning of the medium.  Anyone who has used an on-line dating or auction site is all too familiar with the problem; anyone can claim to be anyone.  Likewise, confidence games, on or off-line, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations, but more commonly are individuals. Con artists have classically acted alone, but now, especially on the Internet, they usually group together in criminal organizations for increasingly complex criminal endeavors. Con artists are skilled marketers who can develop effective marketing strategies, which include a target audience and an appropriate marketing plan: crafting promotions, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy is successful. And falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, which is usually made up of affinity groups consisting of individuals grouped around an objective, bond, or association like Facebook or LinkedIn Group users. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims’ trust by affiliating themselves with these groups.  Historically, various mediums of communication have been initially used to lure the victim. In most cases, today’s fraudulent schemes begin with an offer or invitation to connect through the Internet or social network, but the invitation can come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives and accepts the offer to connect, some sort of response or acceptance is requested. The response will typically include (in the case of Facebook or LinkedIn) clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let’s say a shyster wants 3,000 followers on Twitter to boost the credibility of her scheme? They can be hers for $5. Let’s say she wants 10,000 satisfied customers on Facebook for the same reason? No problem; she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the con artist wants favorites, likes, retweets, up-votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for just a small fee. Most of the work of fake account setup is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, programs that carry out pre-encoded automated instructions, such as “click the Like button,” repeatedly, each time using a different fake persona.

Just as horror-movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and organized groups are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are performed for the purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links online, but organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks.

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands and thousands of identities have disappeared overnight from among the followers of many well-known celebrities and popular websites. If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one. The process of creation is called sock puppetry, a reference to the children’s toy puppet created when a hand is inserted into a sock to bring the sock to life. In the online world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens. One need only consult a readily available on-line directory of the most common names in any country or region, have a scripted bot pick a first name and a last name, choose a date of birth, and let the bot sign up for a free e-mail account. Next, the bot scrapes on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent the new sock puppet.

Armed with an e-mail address, name, date of birth, and photograph, the fraudster signs the fake persona up for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, the puppets are taught how to talk: scripted to reach out and send friend requests, repost other people’s tweets, and randomly like things they see online. The bots can even communicate and cross-post with one another. Before long, the fraudster has thousands of sock puppets at his or her disposal for use as s/he sees fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake on-line reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced trust and falsely claimed identity.
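From the examiner’s side of the table, the same automation leaves detectable traces. The sketch below illustrates one crude detection heuristic, not a puppet generator: accounts created within seconds of one another are one signal of scripted signup. The data and the threshold are invented for illustration:

    from datetime import datetime

    signups = {  # account -> creation timestamp; hypothetical sample
        "jsmith1": datetime(2017, 3, 1, 2, 0, 5),
        "jsmith2": datetime(2017, 3, 1, 2, 0, 7),
        "jsmith3": datetime(2017, 3, 1, 2, 0, 9),
        "rlopez":  datetime(2017, 3, 4, 14, 22, 0),
    }
    ordered = sorted(signups.items(), key=lambda kv: kv[1])
    burst = set()
    for (a, t1), (b, t2) in zip(ordered, ordered[1:]):
        if (t2 - t1).total_seconds() < 5:   # near-simultaneous signups
            burst.update([a, b])
    print("accounts in suspicious signup bursts:", sorted(burst))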

The fraudster’s environment has changed and is changing over time, from a face-to-face physical encounter to an anonymous on-line encounter in the comfort of the victim’s own home. While some consumers are unaware that a weapon is virtually right in front of them, others are victims who struggle to balance the many wonderful benefits offered by advanced technology against the painful effects of its consequences. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes of perpetrators, even as perpetrators continue to strive to find yet another avenue to commit fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement, and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.

Raising the Drawbridge

One of our CFE Chapter members has had a request from her employer to assist an internal IT systems development team with fraud prevention controls during the systems development life cycle of a new, web-based payment application. Evaluating and assessing the effectiveness of anti-fraud controls on the front end is much more efficient (and far less costly) than applying them on the back end on an emergency basis during or after a fraud investigation. Our member asked us for a rundown of the typical phases of a systems development project.

First off, in any systems development project the employment of a predefined set of “best practices” is generally viewed as having a positive impact on the overall quality of the system being developed. In the case of the systems development life cycle (SDLC), some generally accepted developmental practices can provide additional benefits to a CFE in terms of his or her proactive, fraud prevention control assessment. Specifically, throughout the eight steps of the SDLC, documentation is routinely created that provides valuable potential sources of control description for review. In other words, just employing generally accepted SDLC practice as prescribed in the CFE’s client’s industry is a powerful fraud prevention control in itself.

The first phase of the SDLC, system planning, is relatively straightforward. Executives and others evaluate the effectiveness of the proposed system in terms of meeting the entity’s mission and objectives. This process includes general guidelines for system selection and systems budgeting. Management develops a written long-term plan for the system that is strategic in nature. The plan will most probably change within a few months, but much evidence exists that such front-end planning pays dividends in terms of effective and well-controlled IT solutions over the long term. CFEs can think of this phase of the life cycle as akin to IT governance, and the two are quite compatible. Thus, the first thing the CFE (or any auditor) would like to see is evidence of the implementation of general IT governance activities.

During this phase, several documents are typically generated. They include the long-term plan for development of the specific system within the context of the overall policies for selection of IT projects, a short-term and long-term budget for the project, and a preliminary feasibility study and project authorization. Every project proposal should be documented in writing when originally submitted to management, and a master project schedule should exist that contains all the client’s approved developmental projects. The presence of these documents illustrates a structured, formal approach to systems development within the client operation and, as such, evidences an effective planning system for IT projects and for systems in general. It also demonstrates a formal procedure for the approval of IT projects. The CFE should add all the documents for this phase of the project under review to his or her work paper file and gather the same level of documentation for each of the subsequent SDLC phases.

The second phase is systems analysis, in which IT professionals gather the information requirements for the project. Facts and samples to be used in the IT project are gathered, primarily from end users. A systems analyst or developer then processes the requirements and produces a document summarizing the analysis of the project, usually a systems analysis report. The systems analysis phase and its report should illustrate to the CFE the entity’s ability to be thorough in the application of its systems development process.

Phase three is conceptual design. In phase two, systems analysis, the requirements were gathered and analyzed. Up to this point, the project exists on paper, and each of the future systems’ user groups will have a slightly different view of what it is and will be; this is entirely normal and to be expected. At this point, a conceptual design view is developed that encompasses all the individual views. Although a variety of possible documents could be among the total output of this phase, a data flow diagram (DFD), developed at a general level, is always the final, principal product. For the CFE, the general DFD is evidence that the client is acting in accordance with a generally accepted SDLC framework.

Next comes phase four, systems evaluation and selection. Managers and IT staff choose among alternatives that satisfy the requirements developed in phases two and three and meet the general guidelines and strategic policies of phase one. Part of the analysis of alternatives is a more exhaustive and detailed feasibility study; actually, several types of feasibility studies. A technical feasibility study examines whether the current IT infrastructure makes it feasible to implement a specific alternative. A legal feasibility study examines any legal ramifications of each alternative. An operational feasibility study determines whether the current business processes, procedures, and skills of employees are adequate to successfully implement the specific alternative. Last, a scheduling feasibility study relates to the firm’s ability to meet the proposed schedule for each alternative. Each of these should be combined into a written feasibility report.

At the beginning of detail design, phase five, IT professionals have chosen the IT solution. The DFD design created in phase three is “fleshed out”; that is, details are developed and (hopefully) documented. Examples of some of the types of documentation that might be created include use cases, Unified Modeling Language (UML) diagrams, entity relationship diagrams (ERDs), relational models and normalized data diagrams.  IT professionals often do a walk-through of the software or system at this point to see if any defects in the system can be detected during development. The results of the walk-through should also be documented. To summarize this phase, a detailed design report should be written to explain the steps and procedures taken. It would also include the design documents referred to previously.

Phase six, programming and testing, includes current best practices like the use of object-oriented programs and procedures. No element of the SDLC is more important for CFEs than systems testing. Perhaps none of the phases has been more criticized than testing for being absent or performed at a substandard level. Sometimes management will try to reduce the costs of an IT project by cutting out or reducing the testing. Sound testing includes several key factors. The testing should be done offline before being implemented online. Individual modules should be tested, but even if a module passes the test, it should be tested in the enterprise system offline before being employed. That is, the modules should be tested as stand-alone and then, in conjunction with other applications, tested system wide. Test data and results should be kept, and end users should be involved in the testing.
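In that spirit of stand-alone module testing with retained test data, a minimal sketch follows. The payment function is a hypothetical stand-in for a real module, and the tests are written for the widely used pytest runner:

    def apply_payment(balance_cents: int, payment_cents: int) -> int:
        if payment_cents <= 0:
            raise ValueError("payment must be positive")
        return balance_cents - payment_cents

    def test_apply_payment():               # run offline with: pytest this_file.py
        assert apply_payment(10_000, 2_500) == 7_500

    def test_rejects_nonpositive_payment():
        import pytest
        with pytest.raises(ValueError):
            apply_payment(10_000, 0)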

Phase seven, implementation, represents system deployment. The last step before deployment is a user acceptance sign-off; no system should be deployed without this acceptance, and the user acceptance report should be included in the documentation. After deployment, however, the SDLC processes are not finished. One key step after implementation is to conduct a post-implementation review, which revisits the cost-benefit report, traces actual costs and benefits, and determines how accurate the projections were and whether the project produces an adequate return.

The last and eighth phase is system maintenance.  The ACFE tells us that 80 percent of the costs and time spent on a software system, over its life cycle, occur after implementation. It is precisely for this reason that all of the previously mentioned SDLC documentation should be required. Obviously, the entity can leverage the 80 percent cost by providing excellent documentation. That is the place for the largest cost savings over the life of the system. It is also the argument against cutting corners during development by not documenting developmental steps and the system itself.

I’ll conclude by saying that by proactively consulting on fraud prevention controls and techniques during the SDLC, CFEs can verify that SDLC best practices are operating effectively by examining documentation to identify those major fraud related issues that should be addressed during the various phases. Of course, CFEs would certainly use other means of verification, such as inquiry and checklists as well, but the presence of proper SDLC documentation illustrates the level of application of the best practices in SDLC. Finally, a review of a sample of the documents will provide evidence that the entity is using SDLC best practices, which provides some assurance that systems are being developed efficiently and effectively so as to help raise the drawbridge on fraud.

Cyber-security – Is There a Role for Fraud Examiners?

At a cybersecurity fraud prevention conference I attended recently in California, one of the featured speakers addressed the difference between information security and cybersecurity and the complexity of assessing fraud preparedness controls specifically directed against cyberfraud. It seems the main difficulty is the lack of a standard to serve as the basis of a fraud examiner’s or auditor’s risk review. The National Institute of Standards and Technology’s (NIST) framework has become a de facto standard despite being more than a little light on specific details. Though it’s not a standard, there really is nothing else at present against which to measure cybersecurity. Moreover, the technology that must be the subject of a cybersecurity risk assessment is poorly understood and is mutating rapidly. CFEs, and everyone else in the assurance community, are hard pressed to keep up.

To my way of thinking, a good place to start in all this confusion is for the practicing fraud examiner to consider the fundamental difference between information security and cybersecurity: the differing nature of the threat itself. There is a real distinction between protecting information against misuse of all sorts (information security) and defending against an attack by a government, a terrorist group, or a criminal enterprise that has immense resources of expertise, personnel, and time, all directed at subverting one individual organization (cybersecurity). You can protect your car with a lock and insurance, but those are not the tools of choice if you see a gang of thieves armed with bricks approaching your car at a stoplight. This distinction is at the very core of assessing an organization’s preparations for addressing the risk of cyberattacks and for defending itself against them.

As is true in so many investigations, the cybersecurity element of the fraud risk assessment process begins with the objectives of the review, which lead immediately to the questions one chooses to ask. If an auditor only wants to know, “Are we secure against cyberattacks?” then the answer should be up on a billboard in letters fifty feet high: No organization should ever consider itself safe against cyber attackers. They are too powerful and pervasive for any complacency. If major television networks can be stricken, if the largest banks can be hit, if governments are not immune, then the CFE’s client organization is not secure either. Still, anti-fraud reviewers can ask subtle and meaningful questions of client management, specifically focused on the data and software at risk of an attack. A fraud risk assessment process specific to cybersecurity might delve into the internals of database management systems and system software, requiring the considerable skills of a CFE supported by one or more tech-savvy consultants s/he has engaged to form the assessment team. Or it might call for just asking simple questions and applying basic arithmetic.

If the fraud examiner’s concern is the theft of valuable information, the simple corrective is to make the data valueless, which is usually achieved through encryption. The CFE’s question might be, “Of all your data, what percentage is encrypted?” If the answer is 100 percent, the follow-up question is whether the data are always encrypted: at rest, in transit and in use. If it cannot be shown that all data are secured all of the time, the next step is to determine what is not protected and under what circumstances. The assessment finding would consist of a flat statement of the amount of unencrypted data susceptible to theft and a recitation of the potential value to an attacker of stealing each category of unprotected data. The readers of this blog know that data must be decrypted in order to be used and so would be quick to point out that “universal” encryption in use is, ultimately, a futile dream. There are vendors who think otherwise, but let’s accept the fact that data will, at some time, be exposed within a computer’s memory. Is that a fault attributable to the data or to the memory and to the programs running in it? Experts say it’s the latter. In-memory attacks are fairly devious, but the solutions are not. Rebooting gets rid of them, and anti-malware programs that scan memory can find them. So a CFE can ask, “How often is each system rebooted?” and “Does your anti-malware software scan memory?”
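To make the arithmetic concrete, here is a minimal sketch of the kind of tally a reviewer might run, assuming the client can export an inventory of its data stores with simple encryption flags; every field name and record below is invented for illustration.

```python
# Hypothetical inventory of data stores; names and flags are
# illustrative assumptions, not any particular client's schema.
data_stores = [
    {"name": "customer_db",  "at_rest": True,  "in_transit": True},
    {"name": "hr_share",     "at_rest": False, "in_transit": True},
    {"name": "backup_tapes", "at_rest": False, "in_transit": False},
]

# A store counts as protected only if it is encrypted both at rest
# and in transit; anything else goes into the finding.
protected = [d for d in data_stores if d["at_rest"] and d["in_transit"]]
exposed = [d["name"] for d in data_stores if d not in protected]

pct = 100 * len(protected) / len(data_stores)
print(f"Encrypted at rest and in transit: {pct:.0f}% of data stores")
print("Unprotected stores for the finding:", ", ".join(exposed))
```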

To the extent that software used for attacks is embedded in the programs themselves, the problem lies in a failure of malware protection or of change management. A CFE need not worry this point; according to my California presenter, many auditors (and security professionals) have wrestled with this problem and not solved it either. All a CFE needs to ask is whether anyone would be able to know whether a program had been subverted. An audit of the change management process would often provide a bounty of findings, but would not answer the reviewer’s question. The solution lies in having a version of a program known to be free from flaws (such as newly released code) and an audit trail of known changes. It’s probably beyond the talents of a typical CFE to generate a hash total using a program as data and then to apply the known changes in order to see if the version running in production matches a recalculated hash total. But it is not beyond the skills of the IT experts the CFE can add to her team, nor of the in-house IM staff responsible for keeping their employer’s programs safe. A CFE fraud risk reviewer need only find out if anyone is performing such a check. If not, the CFE can simply conclude and report to the client that no one knows for sure whether the client’s programs have been penetrated or not.
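As a purely hypothetical sketch (simplified to a straight comparison of the production program against a retained, known-good release copy rather than replaying the audit trail of changes; the file paths are invented), the check the CFE should be asking about might look like this:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Return the SHA-256 hash of a file, read in chunks to spare memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Invented paths: the retained release copy versus the running version.
known_good = file_sha256("releases/payroll_v4.2.bin")
in_production = file_sha256("/opt/apps/payroll.bin")

if known_good == in_production:
    print("Production program matches the known-good release.")
else:
    print("MISMATCH: production program differs from the release copy.")
```

If no one in the organization can produce something like this comparison, that fact itself is the finding.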

Finally, a CFE might want to find out if the environment in which data are processed is even capable of being secured. Ancient software running on hardware or operating systems that have passed their end of life is probably not reliable in that regard. Here again, the CFE need only obtain lists and count. How many programs have not been maintained for, say, five years or more? Which operating systems that are no longer supported are still in use? How much equipment in the data center is more than 10 years old? All this takes is a little arithmetic and common sense, not rocket science.
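A minimal sketch of that counting, assuming the client can supply an asset inventory with maintenance, support and purchase data; every field name and record here is invented:

```python
from datetime import date

# Invented inventory records; the field names are assumptions.
inventory = [
    {"asset": "legacy batch app", "last_maintained": date(2012, 3, 1),
     "os_supported": False, "purchased": date(2005, 6, 1)},
    {"asset": "teller front end", "last_maintained": date(2023, 9, 15),
     "os_supported": True, "purchased": date(2019, 1, 10)},
]

today = date.today()
stale = [a["asset"] for a in inventory
         if (today - a["last_maintained"]).days > 5 * 365]
unsupported = [a["asset"] for a in inventory if not a["os_supported"]]
aged = [a["asset"] for a in inventory
        if (today - a["purchased"]).days > 10 * 365]

print("Not maintained in five-plus years:", stale)
print("Running an unsupported OS:", unsupported)
print("Data center equipment over ten years old:", aged)
```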

In conclusion, frauds associated with weakened or absent cybersecurity systems are not likely to become a less important feature of the corporate landscape over time. Instead, they are poised to become an increasingly important aspect of doing business for those who create automated applications and solutions, for those who attempt to safeguard them on the front end, and for those who investigate and prosecute crimes against them on the back end. While the ramifications of every cyber fraud prevention decision are broad and diverse, a few basic good practices can be defined that the CFE, as the resident fraud expert, can help any client management implement:

  • Know your fraud risk and what it should be;
  • Be educated in management science and computer technology. Ensure that your education includes basic fraud prevention techniques and associated prevention controls;
  • Know your existing cyber fraud prevention decision model, including the shortcomings of those aspects of the model in current use and develop a schedule to address them;
  • Know your frauds. Understand the common fraud scenarios targeting your industry so that you can act swiftly when confronted with one of them.

We can conclude that the issues involving cybersecurity are many and complex, but that CFEs are equipped to bring much-needed, fraud-related experience to any management’s table as part of the team confronting them.

Bring Your Own Device – Revisited

I was part of a lively discussion the other night, at the monthly dinner meeting of one of the professional organizations I belong to, between representatives of the two sides of the bring-your-own-device (now expanded into bring-your-own-technology!) debate. And I must say that both sides presented a strong case, with equally broad implications for the fraud prevention programs of their various employing organizations.

As I’m sure a majority of the readers of this blog are well aware, the bring-your-own-device (BYOD) trend of enabling and empowering employees to bring their own devices (e.g., laptops, smartphones, tablets) evolved some time ago into ‘bring your own technology’ (BYODT), which extends to office applications (e.g., word processing), authorized software (e.g., data analytics tools), operating systems, and other proprietary or open-source IT tools (e.g., software development kits, public cloud, communication aids) brought into the workplace.

On the pro side of the discussion at our table, it was pointed out that BYOD contributes to the creation of happier employees. This is because many employees prefer to use their own devices over the basic, budget-constrained devices offered by their company. Employees may also prefer to reduce the number of devices they carry while traveling; before BYOD, traveling employees would carry duplicates of their personal and company-provided devices (e.g., two mobile phones/smartphones, two laptops and so forth).

I myself must confess that I brought a personal laptop to work every day for years because it contained powerful investigative support software too expensive for my employer to provide at the time and because a vision problem made it difficult for me to use my desktop. I used my laptop almost daily although it was never connected to the corporate network, making it necessary for me to inconveniently move back and forth between the two devices.

Our bring-your-own-device advocates then went on to say that implementation of a BYOD program can additionally result in substantial financial savings to IS budgets because employees can use devices and other IS components they already possess. The savings include the cost of purchasing devices for employees, the ongoing maintenance of those devices, and data plans (for voice and data services). These savings can then be utilized by the company to enhance its operating margins or even to offer more employee benefits.

Another of the BYOD advocates, employed in the IS division of her company, pointed out that her division was freed by the BYOD program from a myriad of tasks such as desktop support, troubleshooting and end-user hardware maintenance activities. This saving, in her opinion, could best be utilized by the IS division to optimize its budget and resources. She also pointed out that the popularity of BYODT is due, in part, to the fact that, in her experience, employees like herself adopt technology well before their employers and subsequently bring these enhancements to work. Thus, BYOD results in faster adoption of new technologies, which can also be an enabler for employees to be more productive or creative; a competitive advantage for the entire business. In addition, her right-hand table companion made the argument that employees can use their own, familiar devices to complete their tasks more efficiently, since it gives them the flexibility to quickly customize their device or technology to run faster according to their individual requirements. By contrast, in the case of company-provided devices and technology, such tailoring and customization is often time-consuming, as individual employees have to provide proper cost justifications and then seek authorization through cumbersome and time-consuming change requests.

On the con side, the internal auditor at our table pointed out that by allowing employees to BYOD, the employers implementing the program have opened a new nightmare for their security managers and administrators and, hence, for their fraud prevention programs. The security governance framework and related corporate security and fraud prevention policies will need to be redefined and a great deal of effort will be required to make each policy efficiently operational and streamlined in the BYOD environment.

Of course, I then had to chime in and offer my two cents’ worth: that concerns related to privacy and data protection could be perhaps the biggest challenge for BYOD. In industries like health care and insurance, which deal with sensitive and confidential data under strict federal and state guidelines, such concerns are bound to hinder any rollout of BYOD. Such enterprises will be compelled by law to tread cautiously with this trend. With BYODT, organizational control over data is blurred. Objections are also always raised when business and private data exist on the same device. Thus, this could certainly interfere with meeting the stringent controls mandated by certain regulatory compliance requirements.

Then our auditor friend pointed out that applications and tools may not be uniform on all devices, which can result in incompatibility when trying to, for example, connect to the corporate network or access a Word file created by another employee who has purchased a newer version. And what about a lack of consensus among employees? Some may not be willing or able to use their personal devices or software for company work.

After listening to (and participating in) the excellent arguments on both sides of the supper table, might I suggest that the still-developing trend and the very real benefits realized from BYOD argue that the valid concerns (which this blog has certainly raised in the past) might best be considered normal business challenges, and that companies should approach BYOD implementation by addressing those challenges directly. There are certainly steps (as the ACFE has pointed out) that can be taken to significantly reduce the risk of fraud.

First, establish a well-defined BYOD framework. This can be done by soliciting input from various business process owners and units of the enterprise regarding how different areas actually use portable gadgets. This helps create a uniform governance strategy. Following are what many consider essential steps for creating a BYOD governance framework (a brief illustrative sketch follows the lists below):

— Network access control:

  1. Determine which devices are allowed on the network.
  2. Determine the level of access (e.g., guest, limited, full) that can be granted to these devices.
  3. Define the who, what, where and when of network access.
  4. Determine which groups of employees are allowed to use these devices.

— Device management control:

  1. Inventory authorized and unauthorized devices.
  2. Inventory authorized and unauthorized users.
  3. Ensure continual vulnerability assessment and remediation of the devices connected.
  4. Create mandatory and acceptable endpoint security components (e.g., updated and functional antivirus software, updated security patch, level of browser security settings) to be present on these devices.

— Application security management control:

  1. Determine which operating systems and versions are allowed on the network.
  2. Determine which applications are mandatory (or prohibited) for each device.
  3. Control enterprise application access on a need-to-know basis.
  4. Educate employees about the BYOD policy.
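As promised above, here is a brief, hypothetical sketch of how a few of these rules might be expressed as a single device admission check; the operating systems, version floors, access levels and field names are all illustrative assumptions, not recommended policy values.

```python
# Hypothetical BYOD admission policy; all values are illustrative.
ALLOWED_OS = {"iOS": 16, "Android": 13, "Windows": 11}  # minimum major versions
ACCESS_LEVELS = {"guest", "limited", "full"}

def admit_device(device: dict, requested_access: str) -> str:
    """Return the access level a device should receive, or 'denied'."""
    if requested_access not in ACCESS_LEVELS:
        return "denied"
    minimum = ALLOWED_OS.get(device["os"])
    if minimum is None or device["os_version"] < minimum:
        return "denied"        # OS or version not allowed on the network
    if not (device["antivirus_current"] and device["patched"]):
        return "guest"         # quarantine until the endpoint is healthy
    return requested_access

device = {"os": "Android", "os_version": 14,
          "antivirus_current": True, "patched": False}
print(admit_device(device, "full"))   # prints "guest"
```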

Create a BYOD policy. Make sure there is a clearly defined policy for BYOD that outlines the rules of engagement and states the company’s expectations. The policy should also state and define minimum security requirements, and may even mandate company-sanctioned security tools as a condition for allowing personal devices to connect to company data and network resources. As far as security policies over BYOD go, the IT staff should spell out detailed security requirements for each type of personal device that is used in the workplace and connected to the corporate network.

So, BYOD provides numerous benefits to the business, the key ones being reducing the IT budget and the IT department’s workload, faster adoption of newer technology, and happier employees who have the flexibility to use and customize their devices to enhance efficiency at work. Of course, various challenges come along with these advantages: increased security measures, more stringent controls for privacy and data protection, and other regulatory compliance demands. These challenges provide a fundamentally new opportunity for innovation, redefining the governance structure and the adoption of the underlying technology. CFEs can add value to this entire effort through ongoing review of the overall corporate approach to BYODT for its impact on the fraud risk assessment and the overall fraud prevention program.

Volunteering for Fraud

Our Chapter has a member who has just completed work on an interesting identity theft case. It seems the victim provided various items of highly specific, identifiable personal information to a local specialty retailer in exchange for a verbal agreement to provide a discount card and store credit. Whether the information was subsequently hacked or just carelessly shared, sold or handled by the retailer is still unclear, but what is certain is that this identical information was used by fraudsters, with other metadata, to build two different, highly credible loan applications, one of which was approved by a financial institution.

Our member’s case is an example of the all too real risk posed by voluntarily shared information. In our desire to use services of various kinds (for efficiency, productivity, profit or just for fun) we all seem to find ourselves agreeing to terms and conditions that we may not even see or read or, if we read them, not fully comprehend. A moment’s reflection would lead any knowledgeable auditor to the conclusion that this amounts to contractual sharing of data, even though the “contract” might not refer to any direct exchange of consideration between the company and the patron but rather covers only the use of the offeror’s infrastructure; this practice results in trillions of elements of data that the owner of the infrastructure controls, aggregates and uses for its own economic gain. While simple transactional data associated with the payment process have a definite cycle, voluntarily supplied personal data become perpetual. The intentions behind the former are usually tacitly articulated and apply within the realm of the specific payment arrangements between the agreeing parties. In contrast, voluntarily supplied personal data are generally timeless, can be “sliced and diced” using data mining, and can be further masked and shared for the economic gain of the infrastructure owner and of its business partners and, possibly, its customers.

We can think of data volunteerism as the act of volunteering personal information on the part of a user when, in fact, that user might not necessarily want or mean to do so. It’s not so much consent to share personal data, but rather lack of dissent in sharing data. Passivity or inertia on the part of the personal data sharer plays an important role in one’s attraction to data volunteerism. Immediate perceived benefits of seeking the offered services and, thus, benefiting from them, outweigh anything that the user vaguely understands as the costs of doing so under the service provider’s terms and conditions agreed to by the user.

Before clicking the “I agree” button on an agreement of use, how often have we all paused and analyzed the contents of the agreement? Such agreements are generally long and filled with legalese, and we feel we’re wasting time in getting to the services provided by the company or app that just popped the agreement onto our screen. According to the ACFE, under the prospect theory of decision-making behavior, losses are weighted more heavily than gains; and here we are, delaying the immediate gratification of using some cool phone service. And so we all fall into the trap of refusing to let an apparently harmless verbal or written agreement stand in the way of doing something we want to do right now. As with the case of our member’s client, people willingly share personal information when they are nudged by a sales clerk or by a new app on their phone to do so. The perceived immediate benefits seem to outweigh any remotely noticed costs of volunteering the information.

All of this has broad implications for fraud examination and for law enforcement. Every non-cash payment transaction involves the exchange of personal identifying information on some level. Bank checks, written contracts, account passwords, phone numbers and a host of other identifying information are both the lifeblood of the financial system and the continuous targets of every type of thief. Nothing financial happens until personal data are exchanged, and the more aggregated elements of data fraudsters have about anyone at their command, the easier their job becomes.

As fraud examiners we should strive to make our clients aware of the general ground rules for the sharing of personal data promulgated by the ACFE and others:

  1. The giver must have knowingly consented to the collection, use or disclosure of personal information.
  2. Consent must be obtained in a meaningful way, generally requiring that organizations communicate the purposes for collection, so that the giver will reasonably know and understand how the information will be collected, used or disclosed.
  3. Organizations must create a higher threshold for consent by contemplating different forms of consent depending on the nature of information and its sensitivity.
  4. In a giver-receiver relationship, consent is dynamic and ongoing. It is implied all the time that the giver grants the privilege of use to the information receiver and that the privilege is only good as long as the giver’s consent is not withdrawn.
  5. The receiver has a duty to adequately safeguard the personal data entrusted to it.

A legal definition of consent is hard to find. The common law context suggests that consent is a “freely given agreement.” An agreement, contractual or by choice, implies a particular aim or object. While it is clear that the force of laws and regulations is necessary, in the end, what equally matters is the behavior of the user. Concepts and paradigms such as bounded rationality and prospect theory point to the vulnerability of human users in exercising consent. If that is where the failure occurs, privacy issues will only propagate, not get better. Finally, remember that privacy solutions embedded in the technology to empower users to protect their privacy are only as good as the motivation, knowledge and determination of the user.

As fraud examiners and assurance professionals we have to face the fact that not all our user/clients are equally technology savvy; not all users consider it worth their time to navigate through privacy monitors in a retail store or in an on-line app to feel safe. And generally, all users, indeed all of us, are creatures of bounded rationality.

Costs of cybercrime in 2015 were an estimated US $1.52 billion in the US alone and US $221 billion globally. These criminals find a bonanza if they can successfully perpetrate a data breach in which they break into a system and/or database to steal personally identifiable information (e.g., addresses, Social Security numbers, financial account numbers) or, better yet, data on credit/debit cards.

Data volunteerism nudges people to share more and more personal information, resulting in a huge pool of data across companies and institutions. If hard surveillance, such as a camera watching over a parking lot, is concretely vivid, soft surveillance remains buried in the technology, allowing it to work freely on available data and metadata. As this use of data by app providers and others becomes wider and stronger and related frauds proliferate, the public could lose trust in these providers, and that loss of trust would translate into lost sales for the provider. The best way for CFEs to address these issues for all stakeholders is through client education on the ACFE’s ground rules for self-protection in the sharing of personal information.

The Fire Alarm & the Bottom Line

I was having lunch with a couple of colleagues yesterday and the topic of ‘pulling the fire alarm’ came up. Specifically, ‘pulling the fire alarm’ refers to a corporate employee alerting management about the suspected fraudulent activity of a fellow employee. Everyone at the table agreed that the main reason management is often deprived of this vital intelligence is that the typical employee has a very hard time getting his or her head around the fact that a personally well-known co-worker could even be deceptive or dishonest, let alone actually steal something.

CFEs are trained to know that good people can be, and often are, deceptive. When people think of deception, they often envision being tricked or having the wool pulled over their eyes. Although fraudulent acts are frequently acts of deception, the fallacy lies in believing that individuals within “our organization” would never commit a deceptive act. After all, our conflicted employee tells herself, our organization goes to great lengths to hire top-notch talent who will be loyal and faithful. Our potential whistle-blower is aware that company employees are promoted through the ranks into leadership roles only because they’ve displayed some unique attributes related to their individual knowledge or talent.

ACFE interviews with fraudsters tell us that the psychological impact of events on professionals in today’s world is difficult to predict. Individuals who are typically reasonable and display high integrity can frequently be placed in situations where both personal and professional stress can impact their decisions and actions in ways they may never have imagined. This is where the almost universal tendency to bestow the dangerous gift of the benefit of the doubt must be countered. There is no question that organizations must encourage general openness and transparency in the everyday actions of their employees at all levels. But employees must also be made to understand that if someone questions an action or event, established outlets are available to report those concerns without fear of repercussions. A specific example that unintentionally supports the benefit-of-the-doubt syndrome is an instance where an employee repeatedly performs an inappropriate action among a group of co-workers within the corporate setting. Someone who witnesses the act may not feel comfortable speaking up at the time of the occurrence, especially if the person performing the action is his or her superior in the corporate hierarchy. However, that doesn’t mean it’s okay to walk away from the situation and say nothing. The outlets to report concerns may be as simple as speaking to a supervisor, contacting a human resources representative, or even calling the employee hotline. Employees must be encouraged to speak up whenever they see activity occurring that they believe is inappropriate. If they don’t, they’re perpetuating a culture of denial and silent acceptance.

Such a culture of silent acceptance can grow almost imperceptibly until the organization comes, irrationally, to believe it’s immune to fraud. My luncheon companions agreed that this syndrome is entirely natural, given that all organizations want to believe they’re immune to fraud; then the table talk turned to the following interesting and related points…

It’s unfortunate that it takes some shattering event like a major embezzlement to make some organizations face the fact that fraud doesn’t discriminate; it can happen anywhere, any time. Just as individuals may rationalize why it’s okay to commit fraud, organizations sometimes attempt to rationalize the “whys” that support their belief that fraud won’t happen to them. Every CFE has seen instances of this defensive stance, even during ongoing fraud examinations! There can be multiple beliefs within corporate cultures that contribute to this act of rationalization. What one person views as a very strict policy, another person may see as a simple guideline open to interpretation. It’s always important to maintain several levels of defense against fraud, including multiple preventive and detective controls. Because it is not possible to provide absolute assurance against fraud, it becomes even more critical to ensure that the controls in place are sufficient to place periodic roadblocks, warning signs, or the proverbial fire alarm in appropriate places. It also is important that those controls and warning signs are uniformly applied to all employees within the organizational ranks.

Then there’s the old canard about materiality. Almost the first question you get about a suspected fraud, especially in my experience from financial personnel, is “Is it material?”, meaning is it material to the financial statements. The implication is that the discovered fraud isn’t that important because it will have little or no effect on the bottom line. The ACFE tells us that fraud is dynamic and often can occur long before there is any significant impact to the financial statements. For example, frauds resulting in identity and information theft may eventually prove to have financial ramifications, but the initial ramifications are the breach of identity and information confidentiality. The question about materiality is one of the signs that management may not fully understand the difference between control gaps, which may create opportunity for inappropriate actions, and actual control failures. When it comes to fraud prevention, the question shouldn’t be, “How much was taken or how much did we lose?” but instead, “What fraud opportunity has been created by the control gap identified?” Thus, no fraud is ever immaterial, because even a small amount of identified stolen money may be only the tip of the iceberg. Where one fraud has been identified, there may be several related frauds operative but not yet detected.

In today’s technological world, sophisticated information systems include workflow, authority delegation, acceptance reporting, system alerts, and intrusion technology. These processes rely on programmed controls and periodic monitoring techniques to ensure access is in line with company objectives. Although these system enhancements have improved efficiency in many ways, there are often loopholes that provide a knowledgeable, often high-level, individual with the opportunity to rationalize or take advantage of poorly designed procedures to support a wide range of fraudulent activity. So “authorized” can represent a danger if management places too much reliance on system-established fraud prevention controls and then doesn’t build in mechanisms to appropriately monitor and manage those controls. The simplest example of unauthorized transactions is illustrated in how delegation of authority is established and maintained within systems. If authority delegations are established with no end date, or extended to individuals at a lower responsibility level than the true need, then expenditures may not be approved in line with corporate guidelines. This may seem like a minor control gap, but the potential for fraud, waste and abuse can be significant. And, if this trend goes undetected for an extended period, the risk can become even greater.
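This is exactly the kind of gap a CFE’s data analytics can surface. A hypothetical sketch, assuming a simple extract of the delegation table (the column names, the guideline limit and the records are all invented):

```python
import pandas as pd

# Invented extract of authority delegations; columns are assumptions.
delegations = pd.DataFrame({
    "delegate_id":    ["E1001", "E1002", "E1003"],
    "job_level":      [3, 1, 2],          # 1 = lowest responsibility level
    "approval_limit": [50_000, 250_000, 10_000],
    "end_date":       [pd.Timestamp("2026-06-30"), pd.NaT, pd.NaT],
})

# Flag open-ended delegations: no end date means the authority never expires.
open_ended = delegations[delegations["end_date"].isna()]

# Assumed guideline for illustration: level-1 staff limits cap at 25,000.
over_delegated = delegations[(delegations["job_level"] == 1) &
                             (delegations["approval_limit"] > 25_000)]

print("Delegations with no end date:\n", open_ended)
print("Limits above the level guideline:\n", over_delegated)
```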

Another example may be the use of administrative user IDs for management, granting administrative access to systems and financial accounts. There is a very distinct and established purpose for granting this type of access; however, if the granting of the IDs is not well controlled or monitored, there can be a significant internal control exposure that creates the opportunity for a potentially high level of fraudulent behavior to occur. This doesn’t mean that just because a company has excessive administrative IDs, it can expect that fraud is occurring within its corporate environs. However, those of us around the table agreed that this is why senior management and the board need to understand the reality of an administrative fraud control gap. In case after case, overuse and poor monitoring of these types of IDs by senior corporate officials (like CFOs and CEOs) have created the threat or opportunity for activity that may not be acceptable to the organization.
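The same list-and-count approach applies to administrative IDs. A hypothetical sketch flagging IDs that have never been reviewed, or not reviewed within the past year (again, all names and dates are invented):

```python
import pandas as pd

# Invented listing of administrative IDs; columns are assumptions.
admin_ids = pd.DataFrame({
    "user":        ["cfo_admin", "dba_admin", "svc_legacy"],
    "last_review": [pd.Timestamp("2021-01-15"),
                    pd.Timestamp("2024-11-01"), pd.NaT],
})

# Flag IDs never reviewed, or last reviewed more than a year ago.
cutoff = pd.Timestamp.today() - pd.DateOffset(years=1)
unreviewed = admin_ids[admin_ids["last_review"].isna() |
                       (admin_ids["last_review"] < cutoff)]

print("Administrative IDs lacking a current access review:\n", unreviewed)
```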

Fraudsters are continually evolving, just like the rest of society. As CFEs, we’re painfully aware that unauthorized transactions don’t always occur because of external hacking, although the very real hacking threat seems to be the current obsession. Assurance professionals mustn’t overlook all of the internal fraud possibilities and probabilities that are present due to sophisticated business systems. Fraud in the digital age continues to expand and mature. We have to assist our client organizations to take an ongoing, proactive approach to the examination and identification of the myriad ways unauthorized transactions of every type can slip through their internal firewalls and control procedures.