Tag Archives: cyber-security

Targeting the Blockchain

Both the blockchain and the digital engineering structures supporting it, which underlie the digital currencies fast becoming the financial and transactional media of choice for the nefarious, are increasingly coming under various modes of fraudster attack.

Bitcoins, the most familiar blockchain application, were invented in 2009 by a mysterious person (or group of people) using the alias Satoshi Nakamoto, and the coins are created or ‘mined’ by solving increasingly difficult mathematical equations, requiring extensive computing power. The system is designed to ensure no more than twenty-one million Bitcoins are ever generated, thereby preventing a central authority from flooding the market with new Bitcoins. Most Bitcoins are purchased on third-party exchanges with traditional currencies, such as dollars or euros, or with credit cards. The exchange rates against the dollar for Bitcoin fluctuate wildly and have ranged from fifty cents per coin around the time of its introduction to over $1,240 in 2013 to around $600 today.

The whole point of using a blockchain is to let people, in particular people who don’t trust one another, share valuable data in a secure, tamper-proof way. That’s because blockchains store data using sophisticated math and innovative software rules that are extremely difficult for attackers to manipulate. But as cases like the Mt. Gox Bitcoin hack demonstrate, the security of even the best-designed blockchain and its associated support systems can fail at the places where the fancy math and software rules come into contact with humans, among them skilled fraudsters, in the real world, where things quickly get messy. To understand why, CFEs should start with what makes blockchains “secure” in principle. Bitcoin is a good example. In Bitcoin’s blockchain, the shared data is the history of every Bitcoin transaction ever made: it’s a plain old accounting ledger. The ledger is stored in multiple copies on a network of computers, called “nodes.” Each time someone submits a transaction to the ledger, the nodes check to make sure the transaction is valid, that whoever spent a bitcoin had a bitcoin to spend. A subset of the nodes competes to package valid transactions into “blocks” and add them to a chain of previous blocks. The owners of these nodes are called miners. Miners who successfully add new blocks to the chain earn bitcoins as a reward.

Two things make this system theoretically tamperproof: a cryptographic fingerprint unique to each block, and a consensus protocol, the process by which the nodes in the network agree on a shared history. The fingerprint, called a hash, takes a lot of computing time and energy to generate initially. It thus serves as proof that the miner who added the block to the blockchain did the computational work to earn a bitcoin reward (for this reason, Bitcoin is said to employ a proof-of-work protocol). It also serves as a kind of seal, since altering the block would require generating a new hash. Verifying whether the hash matches its block, however, is easy, and once the nodes have done so they update their respective copies of the blockchain with the new block. This is the consensus protocol.
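The asymmetry at the heart of proof-of-work, costly to produce, cheap to verify, can be sketched in a few lines of Python. This is a toy illustration only: real Bitcoin mining double-hashes an 80-byte block header against a numeric difficulty target, whereas the fixed leading-zeros difficulty below is an assumption made for brevity.

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 hash starts with `difficulty`
    zeros. Finding the nonce takes many thousands of attempts."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

def verify(block_data, nonce, digest, difficulty=4):
    """Any node can confirm the miner's work with a single hash."""
    recomputed = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return recomputed == digest and digest.startswith("0" * difficulty)

nonce, digest = mine("alice pays bob 1 BTC")
assert verify("alice pays bob 1 BTC", nonce, digest)
assert not verify("alice pays bob 2 BTC", nonce, digest)  # tampering fails
```

Note how changing even one character of the transaction data invalidates the hash, which is exactly why the hash can serve as a seal on the block.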

The final security element is that the hashes also serve as the links in the blockchain: each block includes the previous block’s unique hash. So, if you want to change an entry in the ledger retroactively, you have to calculate a new hash not only for the block it’s in but also for every subsequent block. And you have to do this faster than the other nodes can add new blocks to the chain. Consequently, unless you have computers that are more powerful than the rest of the nodes combined (and even then, success isn’t guaranteed), any blocks you add will conflict with existing ones, and the other nodes will automatically reject your alterations. This is what makes the blockchain tamperproof, or immutable.
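The chaining of hashes described above is easy to demonstrate. The toy Python ledger below (illustrative only; the field names and transaction format are invented) shows why a retroactive edit to one block breaks every link after it:

```python
import hashlib

def block_hash(prev_hash, data):
    # Each block's hash covers the previous block's hash,
    # so editing any block invalidates every later link.
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "0" * 64  # genesis block has no real predecessor
    for data in entries:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["A->B 5", "B->C 2", "C->A 1"])
assert is_valid(chain)
chain[0]["data"] = "A->B 500"   # a retroactive edit...
assert not is_valid(chain)      # ...no longer matches the recorded hashes
```

In the real network the attacker would additionally have to recompute the proof-of-work for every subsequent block faster than all honest miners combined, which is what makes the ledger effectively immutable.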

The reality, as experts are increasingly pointing out, is that implementing blockchain theory in actual practice is difficult. The mere fact that a system works like Bitcoin, as many copycat cryptocurrencies do, doesn’t mean it’s just as secure as Bitcoin. Even when developers use tried-and-true cryptographic tools, it’s easy to accidentally put them together in ways that are not secure. Bitcoin has simply been around the longest, so it’s the most thoroughly battle-tested.

As the ACFE and others have indicated, fraudsters have also found creative ways to cheat. It’s been shown that there is a way to subvert a blockchain even if you control less than half of the network’s mining power. The details are somewhat technical, but essentially a “selfish miner” can gain an unfair advantage by fooling other nodes into wasting time on already-solved crypto-puzzles.

The point is that no matter how tamperproof a blockchain protocol is, it does not exist in a vacuum. The cryptocurrency hacks driving recent headlines are usually failures at places where blockchain systems connect with the real world, for example, in software clients and third-party applications. Hackers can, for instance, break into hot wallets, internet-connected applications for storing the private cryptographic keys that anyone who owns cryptocurrency requires in order to spend it. Wallets owned by online cryptocurrency exchanges have become prime targets. Many exchanges claim they keep most of their users’ money in cold hardware wallets, storage devices disconnected from the internet. But as the recent heist of more than $500 million worth of cryptocurrency from a Japan-based exchange showed, that’s not always the case.

Perhaps the most complicated touchpoints between blockchains and the real world are smart contracts, which are computer programs stored in certain kinds of blockchain that can automate financial and other contract-related business transactions. Several years ago, hackers exploited an unforeseen quirk in a smart contract written on Ethereum’s blockchain to steal 3.6 million Ether, worth around $80 million at the time, from a new kind of blockchain-based investment fund. Since the investment fund’s code lived on the blockchain, the Ethereum community had to push a controversial software upgrade called a hard fork to get the money back, essentially creating a new version of history in which the money was never stolen. According to a number of experts, researchers are scrambling to develop other methods for ensuring that smart contracts won’t malfunction.

An important supposed security guarantee of a blockchain system is decentralization. If copies of the blockchain are kept on a large and widely distributed network of nodes, there’s no one weak point to attack, and it’s hard for anyone to build up enough computing power to subvert the network. But recent reports in the trade press indicate that neither Bitcoin nor Ethereum is as decentralized as the public has been led to believe. The reports indicate that the top four bitcoin-mining operations had more than 53 percent of the system’s average mining capacity per week. By the same measure, three Ethereum miners accounted for 61 percent of Ethereum transactions.

Some experts say alternative consensus protocols, perhaps ones that don’t rely on mining, could be more secure. But this hypothesis hasn’t been tested at a large scale, and new protocols would likely have their own security problems. Others see potential in blockchains that require permission to join, unlike in Bitcoin’s case, where anyone who downloads the software can join the network.

Such consensus systems are anathema to the antihierarchical ethos of cryptocurrencies, but the approach appeals to financial and other institutions looking to exploit the advantages of a shared cryptographic database. Permissioned systems, however, raise their own questions. Who has the authority to grant permission? How will the system ensure that the validators are who they say they are? A permissioned system may make its owners feel more secure, but it really just gives them more control, which means they can make changes whether or not other network participants agree, something true believers would see as violating the very idea of blockchain.

So, in the end, for CFEs, the word ‘secure’ ends up being very hard to define in the context of blockchains. Secure from whom? Secure for what?

A final thought for CFEs and forensic accountants. There are no real names stored on the Bitcoin blockchain, but it records every transaction made by your user client; every time the currency is used the user risks exposing information that can tie his or her identity to those actions. It is known from documents leaked by Edward Snowden that the US National Security Agency has sought ways of connecting activity on the Bitcoin blockchain to people in the physical world. Should governments seek to create and enforce blacklists, they will find that the power to decide which transactions to honor may lie in the hands of just a few Bitcoin miners.

The Anti-Fraud Blockchain

Blockchain technology, the series of interlocking algorithms powering digital currencies like Bitcoin, is emerging as a potent fraud prevention tool. As every CFE knows, technology is enabling new forms of money and contracting, and the growing digital economy holds great promise to provide a full range of new financial tools, especially to the world’s poor and unbanked. These emerging virtual currencies and financial techniques are often anonymous, and none has received quite as much press as Bitcoin, the decentralized peer-to-peer digital form of money.

Bitcoins were invented in 2009 by a mysterious person (or group of people) using the alias Satoshi Nakamoto, and the coins are created or “mined” by solving increasingly difficult mathematical equations, requiring extensive computing power. The system is designed to ensure no more than twenty-one million Bitcoins are ever generated, thereby preventing a central authority from flooding the market with new Bitcoins. Most people purchase Bitcoins on third-party exchanges with traditional currencies, such as dollars or euros, or with credit cards. The exchange rates against the dollar for Bitcoin fluctuate wildly and have ranged from fifty cents per coin around the time of its introduction to over $16,000 in December 2017. People can send Bitcoins, or fractions of a bitcoin, to each other using computers or mobile apps, where coins are stored in digital wallets. Bitcoins can be directly exchanged between users anywhere in the world using unique alphanumeric identifiers, akin to e-mail addresses, and there are no transaction fees in the basic system, absent intermediaries.

Anytime a purchase takes place, it is recorded in a public ledger known as the “blockchain,” which ensures no duplicate transactions are permitted. Cryptocurrencies are so called because they use cryptography to regulate the creation and transfer of money, rather than relying on central authorities. Bitcoin acceptance continues to grow rapidly, and it is possible to use Bitcoins to buy cupcakes in San Francisco, cocktails in Manhattan, and a Subway sandwich in Allentown.

Because Bitcoin can be spent online without the need for a bank account and no ID is required to buy and sell the cryptocurrency, it provides a convenient system for anonymous, or more precisely pseudonymous, transactions, where a user’s true name is hidden. Though Bitcoin, like all forms of money, can be used for both legal and illegal purposes, its encryption techniques and relative anonymity make it highly attractive to fraudsters and criminals of all kinds. Because funds are not stored in a central location, accounts cannot readily be seized or frozen by police, and tracing the transactions recorded in the blockchain is significantly more complex than serving a subpoena on a local bank operating within traditionally regulated financial networks. As a result, nearly all the so-called Dark Web’s illicit commerce is facilitated through alternative currency systems. People do not send paper checks or use credit cards in their own names to buy meth and pornography. Rather, they turn to anonymous digital and virtual forms of money such as Bitcoin.

A blockchain is, essentially, a way of moving information between parties over the Internet and storing that information and its transaction history on a disparate network of computers. Bitcoin, and all the other digital currencies, operates on a blockchain: as transactions are aggregated into blocks, each block is assigned a unique cryptographic signature called a “hash.” Once the validating cryptographic puzzle for the latest block has been solved by a coin mining computer, three things happen: the result is timestamped, the new block is linked irrevocably to the blocks before and after it by its unique hash, and the block and its hash are posted to all the other computers that were attempting to solve the puzzle involved in the mining process for new coins. This decentralized network of computers is the repository of the immutable ledger of bitcoin transactions.  If you wanted to steal a bitcoin, you’d have to rewrite the coin’s entire history on the blockchain in broad daylight.

While bitcoin and other digital currencies operate on a blockchain, they are not the blockchain itself. Many computer scientists have observed that, in addition to exchanging digital money, the blockchain can be used to facilitate transactions of other kinds of digitized data, such as property registrations, birth certificates, medical records, and bills of lading. Because the blockchain is decentralized and its ledger immutable, all these types of transactions would be protected from hacking; and because the blockchain is a peer-to-peer system that lets people and businesses interact directly with each other, it is inherently more efficient and cheaper than current systems that are burdened with middlemen such as lawyers and regulators.

A CFE’s client company that aims to reduce drug counterfeiting could have its CFE investigator use the blockchain to follow pharmaceuticals from provenance to purchase. Another could use it to do something similar with high-end sneakers. Yet another, a medical marijuana producer, could create a blockchain that registers everything that has happened to a cannabis product, from seed to sale, letting consumers, retailers and government regulators know where everything came from and where it went. The same thing can be done with any normal crop so, in the same way that a consumer would want to know where the corn on her table came from, or the apple that she had at lunch originated, all stake holders involved in the medical marijuana enterprise would know where any batch of product originated and who touched it all along the way.
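A seed-to-sale ledger of this kind can be sketched with the same hash-linking idea that secures currency transactions. The Python below is a simplified, hypothetical illustration; the batch identifier, event names, and field names are all invented, and a production system would replicate the ledger across many independent nodes rather than hold it in one list.

```python
import hashlib
import json

def record_event(ledger, batch_id, event):
    """Append a custody event, hash-linked to the previous entry so
    the history cannot be silently rewritten."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"batch": batch_id, "event": event, "prev": prev}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    ledger.append(entry)

def chain_intact(ledger):
    """Any stakeholder can replay the log and confirm nothing was altered."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

ledger = []
for event in ["planted", "harvested", "lab-tested", "shipped", "sold"]:
    record_event(ledger, "LOT-001", event)
assert chain_intact(ledger)
ledger[1]["event"] = "destroyed"   # a retroactive edit...
assert not chain_intact(ledger)    # ...is immediately detectable
```

The same replay check is what lets a consumer, retailer, or regulator confirm a batch’s provenance without trusting any single record keeper.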

While a blockchain is not a full-on solution to fraud or hacking, its decentralized infrastructure ensures that there are no “honeypots” of data, like financial or medical records on isolated company servers, for criminals to exploit. Still, touting a bitcoin-derived technology as an answer to cybercrime may seem a stretch considering the high-profile, and lucrative, thefts of cryptocurrency over the past few years. It’s estimated that, as of March 2015, a full third of all Bitcoin exchanges (where people store their bitcoin) had been hacked, and nearly half had closed. There was, most famously, the 2014 pilferage of Mt. Gox, a Japan-based digital coin exchange, in which 850,000 bitcoins worth $460 million disappeared. Two years later another exchange, Bitfinex, was hacked and around $60 million in bitcoin was taken; the company’s solution was to spread the loss to all its customers, including those whose accounts had not been drained.

Unlike money kept in a bank, cryptocurrencies are uninsured and unregulated. That is one of the consequences of a monetary system that exists, intentionally, beyond government control or oversight. It may be small consolation to those affected by these thefts that the Bitcoin network itself and its blockchain have never been breached, a record that speaks to the blockchain’s resistance to hacking.

This security of the blockchain itself demonstrates how smart contracts can be written and stored on it. These are covenants, written in code, that specify the terms of an agreement. They are smart because as soon as the terms are met, the contract executes automatically, without human intervention. Once triggered, it can’t be amended, tampered with, or impeded. Such smart contracts are a tool with the potential to change how business is done. The concept, as with digital currencies, is based on computers synced together. Now imagine that rather than syncing a transaction, software is synced. Every machine in the network runs the same small program. It could be something simple, like a loan: A sends B some money, and B’s account automatically pays it back, with interest, a few days later. All parties agree to these terms, and the agreement is locked in using the smart contract. The parties have achieved programmable money.
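The loan example can be sketched as a toy Python simulation. This is purely illustrative: real smart contracts are compiled programs executed by every node on a chain such as Ethereum, and the class, parameter, and account names here are invented for the sketch.

```python
class LoanContract:
    """Toy stand-in for a smart contract: the terms live in code and
    the repayment fires automatically once the due date arrives."""

    def __init__(self, lender, borrower, principal, rate, due_day):
        self.lender, self.borrower = lender, borrower
        self.principal, self.rate, self.due_day = principal, rate, due_day
        self.settled = False

    def tick(self, accounts, day):
        # Once the condition is met, the transfer executes without
        # human intervention and cannot be stopped or amended.
        if not self.settled and day >= self.due_day:
            repayment = self.principal * (1 + self.rate)
            accounts[self.borrower] -= repayment
            accounts[self.lender] += repayment
            self.settled = True

accounts = {"A": 100.0, "B": 50.0}
accounts["A"] -= 40.0            # A lends B 40 up front
accounts["B"] += 40.0
loan = LoanContract("A", "B", principal=40.0, rate=0.05, due_day=3)
for day in range(1, 5):          # the network "runs" the contract each day
    loan.tick(accounts, day)
assert round(accounts["A"], 2) == 102.0   # principal plus 5% interest
assert round(accounts["B"], 2) == 48.0
```

The key property the sketch captures is that no one has to remember to collect the debt: on day three the repayment simply happens, which is what the text means by programmable money.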

There is no doubt that smart contracts and the blockchain itself will augment the trend toward automation, though it is automation through lines of code, not robotics. For businesses looking to cut costs and reduce fraud, this is one of the main attractions of blockchain technology. The challenge is that, if contracts are automated, what will happen to traditional firm control structures, processes, and intermediaries like lawyers and accountants? And what about managers? Their roles would all radically change. Most blockchain advocates imagine them changing so radically as to disappear altogether, taking with them many of the costs currently associated with doing business. According to a recent report in the trade press, the blockchain could reduce banks’ infrastructure costs attributable to cross-border payments, securities trading, and regulatory compliance by $15-20 billion per annum by 2022.  Whereas most technologies tend to automate workers on the periphery, blockchain automates away the center. Instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets the taxi drivers work with the customer directly.

Whether blockchain technology will be a revolution for good or one that continues what has come to seem technology’s inexorable, crushing ascendance will be determined not only by where it is deployed, but how. The blockchain could be used by NGOs to eliminate corruption in the distribution of foreign aid by enabling funds to move directly from giver to receiver. It is also a way for banks to operate without external oversight, encouraging other kinds of corruption. Either way, we as CFEs would be wise to remember that technology is never neutral. It is always endowed with the values of its creators. In the case of the blockchain and crypto-currency, those values are libertarian and mechanistic; trust resides in algorithmic rules, while the rules of the state and other regulatory bodies are often viewed with suspicion and hostility.

From Inside the Building

By Rumbi Petrozzello, CFE, CPA/CFF
2017 Vice-President – Central Virginia Chapter ACFE

Several months ago, I attended an ACFE session where one of the speakers had worked on the investigation of Edward Snowden. He shared that one of the ways Snowden gained access to some of the National Security Agency (NSA) data he downloaded was through the inadvertent assistance of his supervisor. According to this investigator, Snowden’s supervisor shared his password with Snowden, giving Snowden access to information beyond his subordinate’s level of authorization. In addition, when the security personnel reviewing employee downloads noticed that Snowden was downloading copious amounts of data, they approached Snowden’s supervisor to ask why that might be the case. The supervisor, while acknowledging it to be true, stated that Snowden wasn’t really doing anything untoward.

At another ACFE session, a speaker shared information with us about how Chelsea Manning was able to download and remove data from a secure government facility. Manning would come to work wearing headphones, listening to music on a Discman. Security would hear the music blasting and scan the CDs. Day after day, it was the same scenario. Manning showed up to work, music blaring. Security staff grew so accustomed to Manning, the Discman and her CDs that when she came to work through security with a blank CD boldly labelled “LADY GAGA”, security didn’t blink. They should have, because it was that CD and ones like it that she later carried home from work that contained the data she eventually shared with WikiLeaks.

Both these high-profile disasters are notable examples of the bad outcomes arising from realized internal threats. Both Snowden and Manning worked for organizations that had, and have, more rigorous security procedures and policies in place than most entities. Yet neither Snowden nor Manning needed to perform any magic tricks to sneak data out of the secure sites where the target data was held; it seems that all it took was audacity on the one side and trust and complacency on the other.

When organizations deal with outside parties, such as vendors and customers, they tend to spend a lot of time setting up the structures and systems that will guide how the organization will interact with those vendors and customers. Generally, companies will take these systems of control seriously, if only because of the problems they will have to deal with during annual external audits if they don’t. The typical new employee will spend a lot of time learning what the steps are from the point when a customer places an order through to the point the customer’s payment is received. There will be countless training manuals to which to refer and many a reminder from co-workers who may be negatively impacted if the rookie screws up.

However, this scenario tends not to hold up when it comes to how employees typically share information and interact with each other. This is true despite the elevated risk that a rogue insider represents. Often, when we think about an insider causing harm to a company through fraudulent acts, we tend to imagine a villain, someone we could identify easily because s/he is obviously a terrible person. After all, only a terrible person could defraud their employer. In fact, as the ACFE tells us, the most successful fraudsters are the ones who gain our trust and who, therefore, don’t really have to do too much for us to hand over the keys to the kingdom. As CFEs and Forensic Accountants, we need to help those we work with understand the risks that an insider threat can represent and how to mitigate that risk. It’s important, in advising our clients, to guide them toward the creation of preventative systems of policy and procedure that they sometimes tend to view as too onerous for their employees. Excuses I often hear run along the lines of:

• “Our employees are like family here, we don’t need to have all these rules and regulations”

• “I keep a close eye on things, so I don’t have to worry about all that”

• “My staff knows what they are supposed to do; don’t worry about it.”

Now, if people can easily walk sensitive information out of locations that have documented systems and are known to be high security operations, can you imagine what they can do at your client organizations? Especially if the employer is assuming that their employees magically know what they are supposed to do? This is the point that we should be driving home with our clients.

We should look to address the fact that both trust and complacency in organizations can be problems as well as assets. It’s great to be able to trust employees, but we should also talk to our clients about the fraud triangle and how one aspect of it, pressure, can happen to any staff member, even the most trusted. With that in mind, it’s important to institute controls so that, should pressure arise with an employee, there will be little opportunity open to that employee to act. Both Manning and Snowden have publicly spoken about the pressures they felt that led them to act in the way they did. The reason we even know about them today is that they had the opportunity to act on those pressures.

I’ve spent time consulting with large organizations, often for months at a time. During those times, I got to chat with many members of staff, including security. On a couple of occasions, I forgot and left my building pass at home. Even though I was on a first name basis with the security staff and had spent time chatting with them about our personal lives, they still asked me for identification and looked me up in the system. I’m sure they thought I was a nice and trustworthy enough person, but they knew to follow procedures and always checked on whether I was still authorized to access the building. The important point is that they, despite knowing me, knew to check and followed through.

Examples of controls employees should be reminded to follow are:

• Don’t share your password with a fellow employee. If that employee cannot access certain information with their own password, either they are not authorized to access that information or they should speak with an administrator to gain the desired access. Sharing a password seems like a quick and easy solution when under time pressures at work, but remind employees that when they share their login information, anything that goes awry will be attributed to them.

• Always follow procedures. Someone looking for an opportunity only needs one.

• When something looks amiss, thoroughly investigate it. Even if someone tells you that all is well, verify that this is indeed the case.

• Explain to staff and management why a specific control is in place and why it’s important. If they understand why they are doing something, they are more likely to see the control as useful and to apply it.

• Schedule training on a regular basis to remind staff of the controls in place and the systems they are to follow. You may believe that staff knows what they are supposed to do, but reminding them reduces the risk of their relying on hearsay and secondhand information. Management is often surprised by the gap between what they think staff knows and what staff really knows.

It should be clear to your clients that they have control over who has access to sensitive information and when and how it leaves their control. It doesn’t take much for an insider to gain access to this information. A face you see smiling at you daily is the face of a person you can grow comfortable with and with whom you can drop your guard. However, if you already have an adequate system and effective controls in place, you take the personal out of the equation and everyone understands that we are all just doing our job.

Sock Puppets

The issue of falsely claimed identity in all its myriad forms has shadowed the Internet since the beginning of the medium.  Anyone who has used an on-line dating or auction site is all too familiar with the problem; anyone can claim to be anyone.  Likewise, confidence games, on or off-line, involve a range of fraudulent conduct committed by professional con artists against unsuspecting victims. The victims can be organizations, but more commonly are individuals. Con artists have classically acted alone, but now, especially on the Internet, they usually group together in criminal organizations for increasingly complex criminal endeavors. Con artists are skilled marketers who can develop effective marketing strategies, which include a target audience and an appropriate marketing plan: crafting promotions, product, price, and place to lure their victims. Victimization is achieved when this marketing strategy is successful. And falsely claimed identities are always an integral component of such schemes, especially those carried out on-line.

Such marketing strategies generally involve a specific target market, which is usually made up of affinity groups consisting of individuals grouped around an objective, bond, or association like Facebook or LinkedIn Group users. Affinity groups may, therefore, include those associated through age, gender, religion, social status, geographic location, business or industry, hobbies or activities, or professional status. Perpetrators gain their victims’ trust by affiliating themselves with these groups.  Historically, various mediums of communication have been initially used to lure the victim. In most cases, today’s fraudulent schemes begin with an offer or invitation to connect through the Internet or social network, but the invitation can come by mail, telephone, newspapers and magazines, television, radio, or door-to-door channels.

Once the mark receives and accepts the offer to connect, some sort of response or acceptance is requested. The response will typically include (in the case of Facebook or LinkedIn) clicking on a link included in a fraudulent follow-up post to visit a specified web site or to call a toll-free number.

According to one of Facebook’s own annual reports, up to 11.2 percent of its accounts are fake. Considering the world’s largest social media company has 1.3 billion users, that means up to 140 million Facebook accounts are fraudulent; these users simply don’t exist. With 140 million inhabitants, the fake population of Facebook would be the tenth-largest country in the world. Just as Nielsen ratings on television sets determine different advertising rates for one television program versus another, on-line ad sales are determined by how many eyeballs a Web site or social media service can command.

Let’s say a shyster wants 3,000 followers on Twitter to boost the credibility of her scheme? They can be hers for $5. Let’s say she wants 10,000 satisfied customers on Facebook for the same reason? No problem, she can buy them on several websites for around $1,500. A million new friends on Instagram can be had for only $3,700. Whether the con man wants favorites, likes, retweets, up votes, or page views, all are for sale on Web sites like Swenzy, Fiverr, and Craigslist. These fraudulent social media accounts can then be freely used to falsely endorse a product, service, or company, all for just a small fee. Most of the work of fake account set up is carried out in the developing world, in places such as India and Bangladesh, where actual humans may control the accounts. In other locales, such as Russia, Ukraine, and Romania, the entire process has been scripted by computer bots, programs that will carry out pre-encoded automated instructions, such as “click the Like button,” repeatedly, each time using a different fake persona.

Just as horror movie shape-shifters can physically transform themselves from one being into another, these modern screen shifters have their own magical powers, and organizations of men are eager to employ them, studying their techniques and deploying them against easy marks for massive profit. In fact, many of these clicks are done for the purposes of “click fraud.” Businesses pay companies such as Facebook and Google every time a potential customer clicks on one of the ubiquitous banner ads or links online, but organized crime groups have figured out how to game the system to drive profits their way via so-called ad networks, which capitalize on all those extra clicks.

Painfully aware of this, social media companies have attempted to cut back on the number of fake profiles. As a result, thousands and thousands of identities have disappeared overnight among the followers of many well-known celebrities and popular websites. If Facebook has 140 million fake profiles, there is no way they could have been created manually one by one. The process of creation is called sock puppetry and is a reference to the children’s toy puppet created when a hand is inserted into a sock to bring the sock to life. In the online world, organized crime groups create sock puppets by combining computer scripting, web automation, and social networks to create legions of online personas. This can be done easily and cheaply enough to allow those with deceptive intentions to create hundreds of thousands of fake online citizens. One only needs to consult a readily available on-line directory of the most common names in any country or region. Have a scripted bot merely pick a first name and a last name, then choose a date of birth and let the bot sign up for a free e-mail account. Next, scrape on-line photo sites such as Picasa, Instagram, Facebook, Google, and Flickr to choose an age-appropriate image to represent your new sock puppet.

Armed with an e-mail address, name, date of birth, and photograph, the fraudster signs up the fake persona for an account on Facebook, LinkedIn, Twitter, or Instagram. As a last step, she teaches her puppets how to talk by scripting them to reach out and send friend requests, repost other people’s tweets, and randomly like things they see online. The bots can even communicate and cross-post with one another. Before long, the fraudster has thousands of sock puppets at her disposal for use as she sees fit. It is these armies of sock puppets that criminals use as key constituents in their phishing attacks, to fake online reviews, to trick users into downloading spyware, and to commit a wide variety of financial frauds, all based on misplaced and falsely claimed identity.

The fraudster’s environment has changed and continues to change, from a face-to-face physical encounter to an anonymous online encounter in the comfort of the victim’s own home. While some consumers are unaware that a weapon is virtually right in front of them, others are victims who struggle to balance the many wonderful benefits offered by advanced technology against the painful effects of its consequences. The goal of law enforcement has not changed over the years: to block the roads and close the loopholes used by perpetrators, even as those perpetrators continue to strive to find yet another avenue for fraud in an environment in which they can thrive. Today, the challenge for CFEs, law enforcement, and government officials is to stay on the cutting edge of technology, which requires access to constantly updated resources and communication between organizations; the ability to gather information; and the capacity to identify and analyze trends, institute effective policies, and detect and deter fraud through restitution and prevention measures.

Now is the time for CFEs and other assurance professionals to continuously reevaluate all we take for granted in the modern technical world and to increasingly question our ever-growing dependence on the whole range of ubiquitous machines whose potential to facilitate fraud so few of our clients and the general public understand.

Analytics & Fraud Prevention

During our Chapter’s live training event last year, ‘Investigating on the Internet’, our speaker, Liseli Pennings, pointed out that, according to the ACFE’s 2014 Report to the Nations on Occupational Fraud and Abuse, organizations that have proactive, internet-oriented data analytics in place experience a 60 percent lower median loss from fraud, roughly $100,000 lower per incident, than organizations that don’t use such technology. Further, the report found, use of proactive data analytics cuts the median duration of a fraud in half, from 24 months to 12 months.

This is important news for CFEs who daily confront more sophisticated frauds and criminals who are increasingly cyber based. It means that integrating more mature forensic data analytics capabilities into a fraud prevention and compliance monitoring program can improve risk assessment, detect potential misconduct earlier, and enhance investigative field work. Moreover, forensic data analytics is a key component of effective fraud risk management as described in the Committee of Sponsoring Organizations of the Treadway Commission’s most recent Fraud Risk Management Guide, issued in 2016, particularly around the areas of fraud risk assessment, prevention, and detection. It also means that, according to Pennings, fraud prevention and detection is an ideal big data-related organizational initiative. With the growing speed at which they generate data, specifically around their financial reporting and sales business processes, our larger CFE client organizations need ways to prioritize risks and better synthesize information using big data technologies, enhanced visualizations, and statistical approaches to supplement traditional rules-based investigative techniques supported by spreadsheet or database applications.

But with this opportunity to integrate analytics and fraud prevention comes a caution. As always, before jumping into any specific technology or advanced analytics technique, it’s crucial to first ask the right risk- or control-related questions to ensure the analytics will produce meaningful output for the business objective or risk being addressed:

  • What business processes pose a high fraud risk? High-risk business processes include the sales (order-to-cash) cycle and payment (procure-to-pay) cycle, as well as payroll, accounting reserves, travel and entertainment, and inventory processes.
  • What high-risk accounts within the business process could identify unusual account pairings, such as a debit to depreciation and an offsetting credit to a payable, or accounts with vague or open-ended “catch-all” descriptions such as “miscellaneous,” “administrative,” or blank account names?
  • Who recorded or authorized the transaction? Posting analysis or approver reports could help detect unauthorized postings or inappropriate segregation of duties by looking at the number of payments by name, minimum or maximum amounts, sum totals, or statistical outliers.
  • When did transactions take place? Analyzing transaction activity over time could identify spikes or dips, such as before and after period ends or on weekends, holidays, or off-hours.
  • Where does the CFE see geographic risks, based on previous events, the economic climate, cyber threats, recent growth, or perceived corruption? Further segmentation can be achieved by business units within regions and by the accounting systems on which the data resides.
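Two of those questions, who recorded the transaction and when, lend themselves to a very small analytics sketch. The following is a minimal illustration, not a production tool: the journal-entry records, the business-hours window, and the z-score threshold are all hypothetical assumptions chosen for the example.

```python
from datetime import datetime
from statistics import mean, stdev

# Hypothetical journal-entry records: (entry_id, approver, amount, posted_at)
entries = [
    ("JE-001", "alice", 1200.00, datetime(2023, 3, 6, 10, 15)),  # Monday, business hours
    ("JE-002", "alice", 1350.00, datetime(2023, 3, 7, 11, 0)),
    ("JE-003", "bob",    980.00, datetime(2023, 3, 11, 23, 40)), # Saturday, late night
    ("JE-004", "bob",   1020.00, datetime(2023, 3, 8, 9, 30)),
    ("JE-005", "alice", 9800.00, datetime(2023, 3, 9, 14, 5)),   # unusually large amount
]

def off_hours(ts, start=8, end=18):
    """Flag postings made on weekends or outside normal business hours."""
    return ts.weekday() >= 5 or not (start <= ts.hour < end)

def amount_outliers(records, z=1.5):
    """Flag entries whose amounts sit more than z standard deviations from the mean."""
    amounts = [amt for _, _, amt, _ in records]
    mu, sigma = mean(amounts), stdev(amounts)
    return [eid for eid, _, amt, _ in records if sigma and abs(amt - mu) / sigma > z]

timing_flags = [eid for eid, _, _, ts in entries if off_hours(ts)]
amount_flags = amount_outliers(entries)
```

Run against the sample data, the timing test flags the Saturday-night posting and the z-score test flags the oversized entry; either list would simply become a starting point for follow-up inquiry, not a finding in itself.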

The benefits of implementing a forensic data analytics program must be weighed against challenges such as obtaining the right tools or professional expertise, combining data (both internal and external) across multiple systems, and the overall quality of the analytics output. To mitigate these challenges and build a successful program, the CFE should consider that the priority of the initial project matters. Because the first project often serves as a pilot for success, it’s important that it address meaningful business or audit risks that are tangible and visible to client management. This initial project should also be reasonably attainable, with minimal dollar investment and actionable results. It’s best to select a first project that is in high demand, draws on data residing in easily accessible sources, and offers a compelling, measurable return on investment. Areas such as insider threat, anti-fraud, anti-corruption, or third-party relationships make for good initial projects.

In the health care insurance industry where I worked for many years, one of the key goals of forensic data analytics is to increase the detection rate of health care provider billing non-compliance while reducing the risk of false positives. From a capabilities perspective, organizations need to embrace both structured and unstructured data sources and consider the use of data visualization, text mining, and statistical analysis tools. Since the CFE will usually be working as a member of a team, the team should demonstrate the first success story, then leverage and communicate that success model widely throughout the organization. Results should be validated before successes are communicated to the broader organization. For best results and sustainability of the program, the fraud prevention team should be a multidisciplinary one that includes IT, business users, and functional specialists, such as management scientists, who are involved in designing the analytics associated with the day-to-day operations of the organization and hence tied to the objectives of the fraud prevention program. It helps to communicate across multiple departments to update key stakeholders on the program’s progress under a defined governance regime. The team shouldn’t just report noncompliance; it should seek to improve the business by providing actionable results.

The forensic data analytics functional specialists should not operate in a vacuum; every project needs one or more business champions who coordinate with IT and the business process owners. Keep the analytics simple and intuitive; don’t pack so much information into one report that it becomes hard to understand. Finally, invest time in automation rather than manual refreshes to make the analytics process sustainable and repeatable. The best trends, patterns, or anomalies often emerge when multiple months of vendor, customer, or employee data are analyzed over time, not just in the aggregate. Also, keep in mind that enterprise-wide deployment takes time. While quick projects may take four to six weeks, integrating the entire program can easily take more than one or two years. Programs need to be refreshed as new risks emerge and business activities change, and staff need ongoing training, collaboration, and modern technologies.
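The point about analyzing months of data over time, rather than in the aggregate, can be sketched in a few lines. This example is hypothetical: the payment log, the vendor names, and the 20 percent month-over-month growth threshold are illustrative assumptions, and a real program would tune the threshold to the business.

```python
from collections import defaultdict

# Hypothetical payment log: (vendor, "YYYY-MM" period, amount paid)
payments = [
    ("Acme Supply", "2023-01", 10000), ("Acme Supply", "2023-02", 10100),
    ("Acme Supply", "2023-03", 9900),
    ("Delta Services", "2023-01", 5000), ("Delta Services", "2023-02", 7500),
    ("Delta Services", "2023-03", 11200),
]

def monthly_totals(rows):
    """Sum payments per vendor per month."""
    totals = defaultdict(dict)
    for vendor, month, amount in rows:
        totals[vendor][month] = totals[vendor].get(month, 0) + amount
    return totals

def steadily_rising(rows, min_growth=0.2):
    """Flag vendors whose monthly totals grew by at least min_growth every month.

    In the aggregate, such a vendor may look unremarkable; over time, the
    steady ramp-up is the classic pattern of an escalating billing scheme.
    """
    flagged = []
    for vendor, by_month in monthly_totals(rows).items():
        series = [by_month[m] for m in sorted(by_month)]  # "YYYY-MM" sorts correctly
        if len(series) >= 3 and all(b >= a * (1 + min_growth)
                                    for a, b in zip(series, series[1:])):
            flagged.append(vendor)
    return flagged
```

On the sample data, only the vendor whose billings climb every single month is flagged; the vendor with flat totals passes, even though its aggregate spend is higher.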

Research findings by the ACFE and others provide ever more evidence of the benefits of integrating advanced forensic data analytics techniques into fraud prevention and detection programs. By helping increase their client organizations’ maturity in this area, CFEs can assist in delivering a robust program that is highly focused on preventing and detecting fraud risks.

The CFE, Management & Cybersecurity

Strategic decisions affect the ultimate success or failure of any organization. Thus, they are usually evaluated and made by the top executives. Risk management contributes meaningfully and consistently to the organization’s success as defined at the highest levels. To achieve this objective, top executives first must believe there is substantial value to be gained by embracing risk management. The best way for CFEs and other risk management professionals to engage these executives is to align fraud risk management with achievement (or non-achievement) of the organization’s vital performance targets, and use it to drive better decisions and outcomes with a higher degree of certainty.

Next, top management must trust its internal risk management professional as a peer who provides valuable perspective. Every risk assurance professional must earn trust and respect by consistently exhibiting insightful risk and performance management competence, and by evincing a deep understanding of the business and its strategic vision, objectives, and initiatives. He or she must simplify fraud risk discussions by focusing on uncertainty relative to strategic objectives and by categorizing these risks in a meaningful way. Moreover, the risk professional must always be willing to take a contrarian position, relying on objective evidence where readily available, rather than simply deferring to the subjective. Because CFEs share many of these same traits, the CFE can help internal risk executives gain that trust and respect within their client organizations.

In the past, many organizations integrated fraud risk into the evaluation of other controls. Today, per COSO guidance, the adequacy of anti-fraud controls is specifically assessed as part of the evaluation of the control activities related to identified fraud risks. Managements that identify a gap related to the fraud risk assessments performed by CFEs, and that work to implement a robust assessment, come away with an increased focus on potential fraud scenarios specific to their organizations. Many such managements have implemented new processes, including CFE-facilitated sessions with operating management, that allow executives to consider fraud in new ways. The fraud risk assessment can also raise management’s awareness of opportunities for fraud outside its areas of responsibility.

The blurred line of responsibility between an entity’s internal control system and those of outsourced providers creates a need for more rigorous controls over communication between parties. Previously, many companies looked to contracts, service-level agreements, and service organization reports as their approach to managing service organizations. Today, there is a need to go further. Specifically, there is a need for focus on the service providers’ internal processes and tone at the top. Implementing these additional areas of fraud risk assessment focus can increase visibility into the vendor’s performance, fraud prevention and general internal control structure.

Most people view risk as something that should be avoided or reduced. However, CFEs and other risk professionals realize that risk is valued when it can help achieve a competitive advantage. ACFE studies show that investors and other stakeholders place a premium on management’s ability to limit the uncertainty surrounding their performance projections, especially regarding fraud risk. With Information Technology budgets shrinking and more being asked from IT, outsourcing key components of IT or critical business processes to third-party cloud based providers is now common. Management should obtain a report on all the enterprise’s critical business applications and the related data that is managed by such providers. Top management should make sure that the organization has appropriate agreements in place with all service providers and that an appropriate audit of the provider’s operations, such as Service Organization Controls (SOC) 1 and SOC 2 assurance reports, is performed regularly by an independent party.

It’s also imperative that client management understand the safe harbor clauses in data breach laws for the countries and U.S. states where the organization does business.  In the United States, almost every state has enacted laws requiring organizations to notify the state in case of a data breach. The criteria defining what constitutes a data breach are similar in each state, with slight variations.

CFE vulnerability assessments should impress on IT management the need to make upper management aware of all major breach attempts, not just actual incidents, made against the organization. To see the importance of this, one need only open a newspaper and read about the serious data breaches occurring around the world on an almost daily basis. The definition of major may, of course, differ, depending on the organization’s industry and whether the organization is global, national, or local. Additionally, top management and the board should plan to meet with the organization’s chief information security officer (CISO) at least once a year. This meeting should supplement the CFE’s annual update of the fraud risk assessment by helping management understand the state of cybersecurity within the organization and enabling top managers and directors to discuss key cybersecurity topics. It’s also important that the CISO report to the appropriate levels within the organization. Keep in mind that although many CISOs continue to report within the IT organization, the chief information officer’s agenda sometimes conflicts with the CISO’s agenda. As such, the ACFE reports that a better reporting arrangement to promote independence is to migrate reporting lines to other officers such as the general counsel, chief operating officer, chief risk officer (CRO), or even the CEO, depending on the industry and the organization’s degree of dependence on technology.

As a matter of routine, every organization should establish relationships with the appropriate national and local authorities who have responsibility for cybersecurity or cybercrime response. For example, boards of U.S. companies should verify that management has protocols in place to guide contact with the Federal Bureau of Investigation (FBI) in case of a breach; the FBI has established its Key Partnership Engagement Unit, a targeted outreach program to senior executives of major private-sector corporations.

If there is a Chief Risk Officer (CRO) or equivalent, upper management and the board should, as with the CISO, meet with him or her quarterly or, at the least, annually and review all the fraud related risks that were either avoided or accepted. There are times when a business unit will identify a technology need that its executive is convinced is the right solution for the organization, even though the technology solution may have potential security risks. The CRO should report to the board about those decisions by business-unit executives that have the potential to expose the organization to additional security risks.

And don’t forget that management should be made to verify that the organization’s cyber insurance coverage is sufficient to address potential cyber risks. To understand the total potential impact of a major data breach, the board should always ask management to provide the cost per record of a data breach.

No business can totally mitigate every fraud related cyber risk it faces, but every business must focus on the vulnerabilities that present the greatest exposure. Cyber risk management is a multifaceted function that manages acceptance and avoidance of risk against the necessary actions to operate the business for success and growth, and to meet strategic objectives. Every business needs to regard risk management as an ongoing conversation between its management and supporting professionals, a conversation whose importance requires participation by an organization’s audit committee and other board members, with the CFE and the CISO serving increasingly important roles.

Cyber-security – Is There a Role for Fraud Examiners?

At a cybersecurity fraud prevention conference I attended recently in California, one of the featured speakers addressed the difference between information security and cybersecurity and the complexity of assessing the fraud preparedness controls specifically directed against cyber fraud. It seems the main difficulty is the lack of a standard to serve as the basis of a fraud examiner’s or auditor’s risk review. The National Institute of Standards and Technology’s (NIST) framework has become a de facto standard despite being more than a little light on specific details. Though it’s not a standard, there really is nothing else at present against which to measure cybersecurity. Moreover, the technology that must be the subject of a cybersecurity risk assessment is poorly understood and is mutating rapidly. CFEs, and everyone else in the assurance community, are hard pressed to keep up.

To my way of thinking, a good place to start in all this confusion is for the practicing fraud examiner to consider the fundamental difference between information security and cybersecurity: the differing nature of the threat itself. There is a distinction between protecting information against misuse of all sorts (information security) and defending against an attack by a government, a terrorist group, or a criminal enterprise that has immense resources of expertise, personnel, and time, all directed at subverting one individual organization (cybersecurity). You can protect your car with a lock and insurance, but those are not the tools of choice if you see a gang of thieves armed with bricks approaching your car at a stoplight. This distinction is at the very core of assessing an organization’s preparations for addressing the risk of cyberattacks and for defending itself against them.

As is true in so many investigations, the cybersecurity element of the fraud risk assessment process begins with the objectives of the review, which lead immediately to the questions one chooses to ask. If an auditor only wants to know “Are we secure against cyberattacks?” then the answer should be up on a billboard in letters fifty feet high: No organization should ever consider itself safe against cyber attackers. They are too powerful and pervasive for any complacency. If major television networks can be stricken, if the largest banks can be hit, if governments are not immune, then the CFE’s client organization is not secure either. Still, all anti-fraud reviewers can ask subtle and meaningful questions of client management, specifically focused on the data and software at risk of an attack. A fraud risk assessment process specific to cybersecurity might delve into the internals of database management systems and system software, requiring the considerable skills of a CFE supported by one or more tech-savvy consultants s/he has engaged to form the assessment team. Or it might call for just asking simple questions and applying basic arithmetic.

If the fraud examiner’s concern is the theft of valuable information, the simple corrective is to make the data valueless, which is usually achieved through encryption. The CFE’s question might be, “Of all your data, what percentage is encrypted?” If the answer is 100 percent, the follow-up question is whether the data are always encrypted: at rest, in transit, and in use. If it cannot be shown that all data are secured all of the time, the next step is to determine what is not protected and under what circumstances. The assessment finding would consist of a flat statement of the amount of unencrypted data susceptible to theft and a recitation of the potential value to an attacker in stealing each category of unprotected data. The readers of this blog know that data must be decrypted in order to be used and so would be quick to point out that “universal” encryption in use is, ultimately, a futile dream. There are vendors who think otherwise, but let’s accept the fact that data will, at some time, be exposed within a computer’s memory. Is that a fault attributable to the data or to the memory and to the programs running in it? Experts say it’s the latter. In-memory attacks are fairly devious, but the solutions are not. Rebooting gets rid of them, and antimalware programs that scan memory can find them. So a CFE can ask, “How often is each system rebooted?” and “Does your anti-malware software scan memory?”
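The “what percentage is encrypted” question really is just arithmetic over an inventory. Here is a minimal sketch; the data-store names, sizes, and encryption states are hypothetical assumptions invented for the example, and a real review would draw them from the client’s data inventory.

```python
# Hypothetical data-store inventory:
# (store name, size in GB, states in which the data is encrypted)
stores = [
    ("claims-db",  500, {"at_rest", "in_transit"}),            # no in-use protection
    ("hr-files",   120, {"at_rest", "in_transit", "in_use"}),  # fully covered
    ("legacy-ftp",  80, set()),                                # no encryption at all
]

REQUIRED = {"at_rest", "in_transit", "in_use"}

def coverage_pct(inventory):
    """Percent of total data (by volume) encrypted in all three states."""
    total = sum(gb for _, gb, _ in inventory)
    fully = sum(gb for _, gb, states in inventory if REQUIRED <= states)
    return round(100 * fully / total, 1)

# Stores with any unprotected state: the raw material for the assessment finding.
exposed = [name for name, _, states in stores if not REQUIRED <= states]
```

On the sample inventory, only a small fraction of the data is protected in all three states, and the exposed list names exactly which stores the flat statement in the finding should cover.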

To the extent that software used for attacks is embedded in the programs themselves, the problem lies in a failure of malware protection or of change management. A CFE need not worry this point; according to my California presenter, many auditors (and security professionals) have wrestled with this problem and not solved it either. All a CFE needs to ask is whether anyone would be able to tell whether a program had been subverted. An audit of the change management process would often provide a bounty of findings but would not answer the reviewer’s question. The solution lies in having a version of a program known to be free from flaws (such as newly released code) and an audit trail of known changes. It’s probably beyond the talents of a typical CFE to generate a hash total using a program as data and then apply the known changes to see whether the version running in production matches a recalculated hash total. But it is not beyond the skills of the IT experts the CFE can add to her team, or of the in-house IM staff responsible for keeping their employer’s programs safe. A CFE fraud risk reviewer need only find out if anyone is performing such a check. If not, the CFE can simply conclude and report to the client that no one knows for sure whether the client’s programs have been penetrated.
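For the technically inclined, the hash-total check the presenter describes is only a few lines of code. This sketch uses Python’s standard hashlib; the baseline dictionary of known-good hashes is a hypothetical stand-in for whatever record the client keeps at release time.

```python
import hashlib

def file_hash(path, algo="sha256"):
    """Compute a hash total over a program file, reading it in chunks
    so that even large binaries don't need to fit in memory at once."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, baseline):
    """True if the deployed file still matches its known-good hash.

    `baseline` maps file paths to hashes recorded when the release
    was known to be clean; a mismatch means the program has changed
    outside the audit trail and warrants investigation.
    """
    return file_hash(path) == baseline.get(path)
```

The reviewer’s question then reduces to: does anyone record the baseline at release time and run this comparison against production? If the answer is no, the conclusion in the text above follows directly.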

Finally, a CFE might want to find out if the environment in which data are processed is even capable of being secured. Ancient software running on hardware or operating systems that have passed their end of life are probably not reliable in that regard. Here again, the CFE need only obtain lists and count. How many programs have not been maintained for, say, five years or more? Which operating systems that are no longer supported are still in use? How much equipment in the data center is more than 10 years old? All this is only a little arithmetic and common sense, not rocket science.
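The list-and-count exercise above can likewise be expressed as a trivial script. The inventory records and thresholds here are hypothetical, chosen to mirror the questions in the text (programs unmaintained for five years, hardware older than ten).

```python
from datetime import date

# Hypothetical asset inventory: (name, kind, last maintained or purchased)
inventory = [
    ("ledger-batch", "program",  date(2012, 5, 1)),   # unmaintained for years
    ("ap-interface", "program",  date(2021, 9, 15)),  # recently maintained
    ("db-server-03", "hardware", date(2010, 2, 1)),   # aging data center gear
]

def older_than(items, kind, years, today=date(2023, 1, 1)):
    """Names of assets of the given kind last touched more than `years` ago."""
    cutoff = today.replace(year=today.year - years)
    return [name for name, k, d in items if k == kind and d < cutoff]

stale_programs = older_than(inventory, "program", 5)    # 5+ years unmaintained
old_hardware   = older_than(inventory, "hardware", 10)  # 10+ years in service
```

The counts, and the named items behind them, are exactly the kind of arithmetic-and-common-sense evidence the paragraph above calls for.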

In conclusion, frauds associated with weakened or absent cybersecurity systems are not likely to become a less important feature of the corporate landscape over time. Instead, they are poised to become an increasingly important aspect of doing business for those who create automated applications and solutions, for those who attempt to safeguard them on the front end, and for those who investigate and prosecute crimes against them on the back end. While the ramifications of every cyber fraud prevention decision are broad and diverse, a few basic good practices can be defined that the CFE, as fraud expert, can help any client management implement:

  • Know your fraud risk and what it should be;
  • Be educated in management science and computer technology. Ensure that your education includes basic fraud prevention techniques and associated prevention controls;
  • Know your existing cyber fraud prevention decision model, including the shortcomings of those aspects of the model in current use and develop a schedule to address them;
  • Know your frauds. Understand the common fraud scenarios targeting your industry so that you can act swiftly when confronted with one of them.

We can conclude that the issues involving cybersecurity are many and complex, but that CFEs are equipped to bring much-needed, fraud-related experience to any management’s table as part of the team confronting them.