Tuesday, August 22, 2017

Did artificial intelligence kill the BlackBerry? It did for me.

Recently, The Globe and Mail noted in their political briefing newsletter that Samsung's Knox software is deemed to be as secure as the traditional BlackBerry:

"Shared Services Canada, the department in charge of overseeing IT for the federal government, is set to offer alternatives to bureaucrats over the next 18 months as part of “a new approach to mobile service to better serve its clients, use new technology and adapt to changes in the marketplace.” Samsung and its line of Android-powered smartphones was the first to be approved by Shared Services, but only after two years and several tests showed that Samsung’s phones passed military-grade requirements."

In this blog, we've covered BlackBerry's steady fall into oblivion. For me, BlackBerry was my first smartphone. I even got excited about the Torch, thinking it was the perfect compromise between the touch screen and the classic keyboard. However, that feeling faded quite quickly after using the device. It was so underpowered and underwhelming compared to the competition.

When looking at Porter's Five Forces as they surrounded this once-mighty Canadian tech giant, we could say that both Apple's iOS and Android offer better substitutes: better devices, more power, better apps, etc. Essentially, these devices have evolved so much that they bring the power of the PC into the palm of one's hand (Samsung's Note 8 is expected to have 6GB of RAM!). This alone doesn't explain why BlackBerry was ultimately displaced from corporate IT, well before Samsung's Knox became BlackBerry's equal in terms of security.

I think there were two key developments that enabled BlackBerry's decline.

The better-known one is the "consumerization of IT" phenomenon: users wanted to use their latest iPhone or Android device instead of the BlackBerry in the corporate environment. Going back to Porter's Five Forces, this speaks to the "bargaining power of buyers": people no longer wanted to be limited to the "one-trick pony" of email and BB Messenger. And they were willing to lobby their corporate IT departments to bring on Android and Apple devices.

This leads to the second, less well known factor.

What allowed consumerization to take place was that Microsoft took an open approach to licensing its Exchange ActiveSync protocol. This move paved the way for iPhone and Android devices to connect to the corporate email server. Microsoft's open attitude essentially transferred power from BlackBerry to the consumer.

But for me it was a little different. Of course I like the apps and GPS that my Android and iPhone bring me: the ability to read Kindle books and listen to audiobooks and podcasts without having to carry multiple devices is definitely a productivity boost. But really it was one particular app that enabled me to switch: SwiftKey.

And that's where we get to the artificial intelligence.

I was a big fan of the BlackBerry, primarily because I thought I couldn't live without the physical QWERTY keyboard. But friends who were encouraging me to switch mentioned that Android sported the SwiftKey keyboard, which is powered by artificial intelligence. For me, this keyboard is much better at learning the words I type than the predictive text feature in its iOS counterpart (which I use for work).

A little while ago, I tried a colleague's BlackBerry and, ironically, my hands hurt. That's probably because I type a lot less, specifically because of the AI approach taken by SwiftKey. As per the native analytics tracker in my app (see graphic below), I have saved myself over 350,000 taps and am 28% more efficient.

Today, over a quarter billion people use SwiftKey on their mobile devices. Although a little hidden from the view of analysts and academics, advances in artificial intelligence enabled SwiftKey (now owned by Microsoft) to offer a substitute for the once dominant BlackBerry physical keyboard. And for me personally it was this little piece of exponential technology (along with the relatively giant landscape of the Samsung Note 1) that convinced me it was time to switch.

And like I said, I can never go back.

Sunday, June 18, 2017

"Bitcoin process flow": Accountant's guide to risk & controls around the blockchain

For the past year, I have been following blockchain to assess how this exponential technology will impact financial auditing.

Unlike artificial intelligence, quantum computing, or virtual reality, this technology addresses the heart of the accounting profession: it is an innovation in the process of recording and accounting for transactions. Furthermore, it captures "proof of interaction" by leveraging digital signatures as the basis for executing exchanges. Both these features speak to the core of what we do as accountants and auditors.

But before we get ahead of ourselves, it is important to look at blockchain in a nuanced way. On the one hand, technologists should be careful about overstating how the blockchain will impact the audit. At the same time, the audit profession can't afford to ignore it. To do so would invite the profession to repeat the mistakes of Kodak, which, despite inventing the digital camera in 1975, was ultimately disrupted by that very same technology.

Part of the problem was assuming that digital technology was changing at a linear pace instead of an exponential one. In this post, Peter Diamandis talks about how "30 linear steps" compares to "30 exponential steps" (and talks more broadly about linear vs. exponential thinking). Ray Kurzweil, the famed Googler, recounts the infamous story of how the inventor of chess requested an exponentially growing amount of rice (and is rumored to have lost his head).

Going back to looking at this as an auditor, I think a useful starting point for understanding blockchain is "professional skepticism". Specifically:

Why would people trust this?

It's been quite the task trying to understand how the public blockchain, specifically bitcoin, disintermediates centralized authorities, such as banks, in settling transactions between two parties that don't know each other. In a sense, a retailer like Overstock.com only needs to receive a string of digits, such as:

https://blockchain.info/block/0000000000000002f7de2a020d4934431bf1dc4b75ef11eed2eede55249f0472 and be satisfied that the purchaser has the bitcoins required to buy the merchandise and hasn't already spent them. That is, they have "assurance" from the above string of digits and characters that the sender has not already spent the bitcoins or simultaneously sent them to someone else.

Part 1: Background on the process flow

Before going through the walk-through, it is worth watching these videos to get some background on how Bitcoin works.

The following video illustrates the peer-to-peer nature of the ledger:

This video gives a good 5-minute summary delving more into the technical details of bitcoin. If you need more, check out this 22-minute video by the same author.

The following video by Andreas Antonopoulos is especially helpful in understanding how the blockchain works at a deeper level. I encourage watching the whole video, but if you want to get to the meat of how the proof of work and SHA hash function work, skip to this point in the video.

As noted in these videos, when you send or receive bitcoins there's no exchange of actual digital code. Rather, the ledgers across the bitcoin network are merely updated. It's quite fitting for us accountants: bitcoin holdings are really just the sum of a person's bitcoin transactions.
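A toy ledger in Python can make this concrete. This is only a simplified sketch of the idea (real bitcoin tracks unspent transaction outputs, not account rows), with invented entries for illustration:

```python
# Toy illustration (not the real Bitcoin data model): a holder's "balance"
# is never stored anywhere; it is derived by summing their transactions
# on the shared ledger.
ledger = [
    {"from": "coinbase", "to": "alice", "amount": 0.10},
    {"from": "alice",    "to": "bob",   "amount": 0.015},
]

def balance(ledger, who):
    received = sum(t["amount"] for t in ledger if t["to"] == who)
    sent = sum(t["amount"] for t in ledger if t["from"] == who)
    return received - sent

print(round(balance(ledger, "alice"), 8))  # 0.085
print(round(balance(ledger, "bob"), 8))    # 0.015
```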

Part 2: Walk-through of a Bitcoin Transaction

I originally mapped out this "walk-through" of a bitcoin transaction in PowerPoint. The transaction is largely based on the book Mastering Bitcoin by Andreas Antonopoulos (the same individual in the video above). He has been nice enough to make a number of the chapters available online, including chapters 2, 5 and 8, which I used to develop this flow.

The bitcoin transaction that I used to perform the walk-through is the same one used in the book and it belongs to block #277298. As per the book, Alice sends 0.015 bitcoins to Bob's cafe to buy a coffee.

Step 1: Get a bitcoin wallet and some bitcoin.

For those bold enough to transact in bitcoin, they need to set up a bitcoin wallet on their computer or mobile phone. Most important of all, this needs to be secured, as it holds your private key, which is used to sign the transactions you send to others. If this key is compromised or lost, you will lose all your bitcoins! And unlike credit cards, there is no central authority to complain to if this happens.

If you live in Toronto, you can actually buy the bitcoins at Deloitte at Bay and Adelaide (but you will need to set-up your digital wallet before doing this).

It cannot be overstated that this is where the bulk of security issues occur, making bitcoin prone to hacking. As noted in this article, the August 2016 hack of Bitfinex had to do with the way the actual wallets were secured using multi-signature wallets, where multiple parties (the user, Bitfinex, and BitGo) held the keys. It should be clear, however, that it's not the actual ledger that is being hacked, or more accurately modified. Instead, it's the encryption keys that are being stolen by the attackers.
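To illustrate the multi-signature idea, here is a minimal sketch in Python of a 2-of-3 approval rule, assuming (as described above) that the keys are split across the user, Bitfinex, and BitGo. Real multisig verifies cryptographic signatures on-chain; this toy merely counts approvals:

```python
# Toy 2-of-3 threshold check. Real multi-signature wallets verify
# cryptographic signatures against locking scripts; this sketch only
# counts which of the three assumed key holders have approved.
REQUIRED = 2
KEY_HOLDERS = {"user", "bitfinex", "bitgo"}

def authorized(signers: set) -> bool:
    return len(signers & KEY_HOLDERS) >= REQUIRED

print(authorized({"user"}))           # False: one key is not enough
print(authorized({"user", "bitgo"}))  # True: any two of the three keys
```

The design point is that stealing any single key (even the exchange's) should not be enough to move funds; the Bitfinex incident shows the scheme still fails if enough keys are compromised together.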

How did the thieves access the funds given that the ledger is reporting all transactions publicly?

This article from the Verge gives some insights on how bitcoins can be effectively laundered out of the blockchain.

Step 2: Send bitcoins to the recipient.

Process: In this example, Alice is sending the bitcoins to Bob's public key "1Cdid9KFAaatwczBwBttQcwXYCpvK8h7FK", which is also known as his bitcoin address.

If you want to wade into the details of how the transaction is set up and transmitted, check out these two posts (here and here) by the Google engineer Ken Shirriff.

Risks:  Unauthorized recipient is sent the bitcoin. Unauthorized user modifies the payments.

Controls: Public-key cryptography: As noted in the process, Alice must send the bitcoin to Bob's bitcoin address, i.e. his public key. As long as she is 100% sure that it is actually Bob's address, only Bob will be able to access those bitcoins. In this scenario, Alice will likely scan Bob's QR code since she is buying the coffee from him. However, if this were an online transaction, she would need an alternative method to verify that she is sending her bitcoins to the right address. Public-key cryptography also ensures that the message can't be altered.

Step 3: Generate the transaction ID

Process: After the bitcoins are sent to the recipient, a transaction identification number is generated, which in this case is “7957a35fe64f80d234d76d83a2a8f1a0d8149a41d81de548f0a65a8a999f6f18”.

Risks: Transaction will not be properly identified.

Control: Each bitcoin transaction is uniquely identified by its transaction ID.
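For the curious, the transaction ID is derived by applying SHA-256 twice to the serialized transaction and displaying the result in reverse byte order. A sketch in Python, using placeholder bytes rather than Alice's actual serialized transaction:

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    """Bitcoin transaction ID: SHA-256 applied twice to the serialized
    transaction, with the digest displayed in reverse byte order."""
    digest = hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()
    return digest[::-1].hex()

# Placeholder bytes, not the real wire-format transaction:
print(txid(b"example serialized transaction"))
```

Because the ID is a hash of the transaction's contents, any change to the transaction produces a completely different ID.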

Step 4: Perform checks at the node

Process: The transaction is captured by the initial node. See below for how the controls at this step would be classified as “input edit controls” or data validation routines.

Risks: An inaccurate, invalid, or incomplete transaction (or transaction details) will be posted to the blockchain.

Controls: The following list of controls is taken largely verbatim from chapter 8 of Antonopoulos's book mentioned earlier (or click here to see "Independent Verification of Transactions" in chapter 8):

Validity checks. The real genius of bitcoin is that it ensures that the person sending you bitcoins already has them. In other words, it provides comfort on the existence assertion to the potential vendor or other person who will receive those bitcoins. With respect to data validations, it provides the following checks:
  • None of the inputs have hash=0, N=–1 (coinbase transactions should not be relayed).
  • A matching transaction in the pool, or in a block in the main branch, must exist.
  • For each input, if the referenced output exists in any other transaction in the pool, the transaction must be rejected.
  • For each input, look in the main branch and the transaction pool to find the referenced output transaction. If the output transaction is missing for any input, this will be an orphan transaction. Add to the orphan transactions pool, if a matching transaction is not already in the pool.
  • For each input, if the referenced output transaction is a coinbase output, it must have at least COINBASE_MATURITY (100) confirmations.
  • For each input, the referenced output must exist and cannot already be spent.
  • Reject if transaction fee would be too low to get into an empty block.
  • The unlocking scripts for each input must validate against the corresponding output locking scripts.
System-based validation. The following is a general data validation that ensures the transaction is formatted per the bitcoin rules. As per Ken Shirriff’s posts, noted in step 2, Bitcoin is very unforgiving when it comes to processing transactions: any inconsistency with the protocol will result in the transaction being rejected.
  • The transaction’s syntax and data structure must be correct. 
Completeness check. The following data validation ensures that the transaction is complete:
  • Neither lists of inputs or outputs are empty.
Limit checks. The following data validation rules ensure that transactions submitted for processing do not exceed the limits set by the Bitcoin protocol:
  • The transaction size in bytes is less than MAX_BLOCK_SIZE.
  • The transaction size in bytes is greater than or equal to 100.
  • The number of signature operations contained in the transaction is less than the signature operation limit.
Logical relationship checks. The following data validation routines ensure that values match. The second one is similar to the idea that underpins accounting, where debits equal credits.

  • The unlocking script (scriptSig) can only push numbers on the stack, and the locking script (scriptPubkey) must match isStandard forms (this rejects "nonstandard" transactions)
  • Reject if the sum of input values is less than sum of output values.

Range checks. The following controls ensure that the values submitted are within an acceptable range. The last one is what prohibits the mining of coins beyond the 21 million limit set by the protocol:
  • nLockTime is less than or equal to INT_MAX.
  • Each output value, as well as the total, must be within the allowed range of values (less than 21m coins, more than 0).
  • Using the referenced output transactions to get input values, check that each input value, as well as the sum, are in the allowed range of values (less than 21m coins, more than 0).
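To see how a few of these rules behave as "input edit controls", here is a toy Python validator. The field names and the simplified dict format are my own invention; the real checks operate on the binary wire format and the transaction pool:

```python
MAX_BLOCK_SIZE = 1_000_000  # bytes (the protocol constant at the time of writing)

def basic_checks(tx):
    """Toy versions of a few of the validation rules listed above.
    `tx` is a simplified dict, not the real serialized transaction."""
    errors = []
    # Completeness check: neither inputs nor outputs may be empty.
    if not tx["inputs"] or not tx["outputs"]:
        errors.append("empty inputs or outputs")
    # Limit checks: transaction size within protocol bounds.
    if not (100 <= tx["size_bytes"] < MAX_BLOCK_SIZE):
        errors.append("size out of range")
    # Logical relationship check: inputs must cover outputs.
    if sum(tx["inputs"]) < sum(tx["outputs"]):
        errors.append("outputs exceed inputs")
    # Range check: each output value between 0 and 21 million coins.
    if any(not (0 < v < 21_000_000) for v in tx["outputs"]):
        errors.append("output value out of range")
    return errors

tx = {"inputs": [0.1], "outputs": [0.015, 0.0845], "size_bytes": 258}
print(basic_checks(tx))  # [] -> the transaction passes these checks
```

The parallel to classic input edit controls is that a transaction failing any single rule is rejected outright rather than flagged for follow-up.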

Step 5: Accept or reject the transaction 

Process: If the transaction meets the criteria then it is passed on to the miners to be mined in a block. Otherwise the transaction is rejected.

Risk/Control: this is a flow through from the previous step.

Step 6: Send transaction to be mined

Process: The transaction is then sent to a pool to be mined. The protocol aims to have the transaction mined within 10 minutes. When the sender submits the transaction, they can add fees to be paid to the miners. However, transactions that do not include fees get a lower priority than those whose senders pay to have them processed. Right now this is not critical, as the main reward is the 12.5 bitcoins awarded for mining a block (i.e. guessing the correct nonce, which is discussed below). When the block reward runs out around 2140, however, these transaction fees will become the main “remuneration” for the miners.

Risk: Miners incentives will not be aligned with verifying transactions. 

Control: The economic incentives give the miners a reason not to counterfeit. It is less work to actually mine the coin than to try to counterfeit it by amassing the necessary computing power. Also, the problem for profit-seeking criminals is that once they counterfeit the coins (e.g. through a 51% attack), the community would lose faith in bitcoin, making it worthless. However, this does not stop non-profit-seeking parties who are looking for a challenge or to destroy the bitcoin platform.

Step 7: Pool the transaction with other transactions to be mined.

Process: As you can see from this list of transactions, transaction ID "7957a35fe64f80d..." is just one of the many transactions that are pooled together to be mined (i.e. checked) and then added to the blockchain ledger. You can try to find the transaction by going to the link, hitting ctrl-F and pasting in the first few digits of the transaction.

Risk and Controls: NA

Step 8: Protocol uses Merkle tree structure to hash transactions

Process: What I found challenging was understanding how the header hash (i.e. this) links to the actual transaction (i.e. this). And that’s where my journey took me to Merkle tree structures. Merkle trees allow recursive hashing that combines transactions into a single root hash, as follows:

(Taken from: here)

Risk: Any node can verify the integrity of the blockchain by downloading the full blockchain ledger and ensuring that each block links to the previous block. However, doing so requires about 100 GB of storage and a few days to download. Consequently, there is a risk that mobile devices, which most people use to execute bitcoin transactions, are unable to perform this verification because they lack the storage capacity and processing power.

Control: The use of Merkle roots enables the verification of bitcoin transactions on small devices such as smartphones. Unlike a computer with sufficient storage, these devices can simply use Merkle paths to verify transactions instead. Simplified Payment Verification, part of the bitcoin protocol, lets you verify that a transaction rolls up into the root, giving comfort that it is part of a block that has been checked and added to the blockchain. This structure also protects the pseudonymity of the other transactions, as it doesn't require exposing the other transactions in the tree. This control, however, ultimately relies on the overall blockchain being verified by the network and does not stand alone.
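The recursive pair-and-hash construction can be sketched in a few lines of Python. This simplified version uses bitcoin's double SHA-256 and, like the real protocol, duplicates the last hash when a level has an odd number of entries, but it glosses over byte-ordering details:

```python
import hashlib

def dbl_sha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(tx_hashes):
    """Pair up transaction hashes and hash each pair, repeating level by
    level until a single 32-byte root remains."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last hash on odd levels
        level = [dbl_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Placeholder transactions, hashed into leaves:
txs = [dbl_sha256(t) for t in [b"tx1", b"tx2", b"tx3"]]
print(merkle_root(txs).hex())
```

Changing any leaf, or even the order of the leaves, changes the root, which is why committing the root into the block header commits to every transaction beneath it.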

Step 9: Combining the hash transactions with previous block

Process: Miners need to generate the block header hash, which consists of the previous block's hash, the Merkle root of the current set of transactions, and the nonce (see steps 10 and 11).

Risk: Transactions will be modified in an unauthorized manner.

Control: This is what effectively puts the "chain" in blockchain. It’s ultimately this structure that prevents transactions that have been added to the ledger from being modified. So let’s say you want to alter a transaction that was added 1 hour ago (remember: it takes 10 minutes to add a block of transactions). You would have to change the following:
- The Merkle root of the block containing the transaction added 60 minutes ago.
- The header hash of the block that was added 50 minutes ago.
- The header hash of the block that was added 40 minutes ago.
- The header hash of the block that was added 30 minutes ago.
- The header hash of the block that was added 20 minutes ago.
- The header hash of the block that was added 10 minutes ago.


This is because each block's header hash is based on the hash of the block before it. Modifying the transaction added an hour ago alters its block's hash, which in turn invalidates each of the 5 blocks that come after it: each of those blocks' data structures depends on the hash value of the block containing the transaction you want to modify.
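A toy Python sketch of this chaining. Real block headers are 80-byte binary structures; here the "header" is just a string, but the effect is the same: changing an old block's contents breaks the link to every block after it.

```python
import hashlib

def block_hash(prev_hash: str, merkle_root: str, nonce: int) -> str:
    header = f"{prev_hash}{merkle_root}{nonce}".encode()
    return hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()

# Build a toy 6-block chain, each header committing to the previous hash.
chain = []
prev = "00" * 32
for i in range(6):
    h = block_hash(prev, f"merkle_root_of_block_{i}", nonce=0)
    chain.append({"prev": prev, "root": f"merkle_root_of_block_{i}", "hash": h})
    prev = h

# Tamper with the oldest block's transactions: its hash changes, so the
# "prev" stored in the next block no longer matches, and the whole tail
# of the chain becomes invalid.
chain[0]["root"] = "tampered_root"
recomputed = block_hash(chain[0]["prev"], chain[0]["root"], 0)
print(recomputed == chain[1]["prev"])  # False: the link to block 1 is broken
```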

Step 10: Setting the Difficulty/Target to identify the nonce

Process: The difficulty is set by the peer-to-peer system itself, which checks whether the average time to mine the last 2,016 blocks was 10 minutes. If not, the difficulty is adjusted up or down to get back to the 10-minute average.

Risk: Transactions will be mined in an untimely manner, i.e. in more or less than 10 minutes.

Control: The difficulty/target effectively acts as a throttle to ensure that mining a block takes 10 minutes regardless of the number of miners or computers involved (which will continually fluctuate). The target determines the amount of guessing the miners have to do to find the "nonce" (see next step). The lower the target, the more difficult it is to guess the number, because there are fewer possibilities for a correct answer.

Antonopoulos, in Mastering Bitcoin, gives the following analogy:

"To give a simple analogy, imagine a game where players throw a pair of dice repeatedly, trying to throw less than a specified target. In the first round, the target is 12. Unless you throw double-six, you win. In the next round the target is 11. Players must throw 10 or less to win, again an easy task. Let’s say a few rounds later the target is down to 5. Now, more than half the dice throws will add up to more than 5 and therefore be invalid. It takes exponentially more dice throws to win, the lower the target gets. Eventually, when the target is 2 (the minimum possible), only one throw out of every 36, or 2% of them, will produce a winning result."
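The retargeting rule itself is simple arithmetic, and can be sketched in Python (omitting the real protocol's 4x clamp on any single adjustment and its cap on the maximum target):

```python
# Simplified retargeting rule: every 2,016 blocks, scale the target by the
# ratio of actual elapsed time to the expected time (2,016 * 10 minutes).
EXPECTED_MINUTES = 2016 * 10

def retarget(old_target: int, actual_minutes: int) -> int:
    # The real protocol also clamps the adjustment to a factor of 4 and
    # caps the target at a protocol maximum; both are omitted here.
    return old_target * actual_minutes // EXPECTED_MINUTES

old = 1 << 220
# Blocks came in twice as fast as intended -> target halves (mining gets harder):
print(retarget(old, EXPECTED_MINUTES // 2) == old // 2)  # True
# Blocks came in twice as slow -> target doubles (mining gets easier):
print(retarget(old, EXPECTED_MINUTES * 2) == old * 2)    # True
```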

Step 11: Produce the header hash, i.e. the proof of work

Process: The miners "brute force" (rapidly guess) the right value of the nonce to produce the required hash. The miners keep iterating the nonce, producing the hash, and checking whether it meets the target for the header hash. The series of flows above illustrates the iterative process the miner goes through. If the miner guesses the right hash, they are awarded the block reward of 12.5 bitcoins. This reward halves every 4 years, and there will only ever be 21 million bitcoins issued. The last bitcoin will be mined around 2140.
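The guessing loop can be sketched in Python. Here difficulty is expressed as a number of leading zero bits rather than the protocol's full 256-bit target, and the "header" is a placeholder string rather than a real 80-byte block header:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int):
    """Brute-force a nonce so that the double SHA-256 of (header + nonce)
    is below the target, i.e. starts with `difficulty_bits` zero bits.
    Toy difficulty only; real Bitcoin compares against a 256-bit target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        data = header + nonce.to_bytes(8, "big")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, h = mine(b"prev_hash|merkle_root|timestamp", difficulty_bits=16)
print(nonce, h)  # the hash begins with at least 16 zero bits (4 hex zeros)
```

Verifying the proof of work is a single hash, while finding it takes (on average) 2^16 guesses at this toy difficulty; that asymmetry is the whole point.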

Risks:
  1. A malicious actor controlling 51% of the network could authorize fraudulent transactions.
  2. People will not sign up to be miners without sufficient reward for their effort.
  3. An infinite supply of bitcoins would expose the currency to inflation risk, i.e. if bitcoins were mined endlessly, the existing bitcoins would decrease in value.

Mitigating Risk 1: As noted in the process above, the miners have to brute-force the nonce and therefore expend energy. In fact, "electricity makes up between 90 and 95 percent of bitcoin mining costs". That means miners have to invest capital, effort, and energy to actually mine bitcoin. As noted earlier, this investment ties the miner to the success of bitcoin: they won't want to hack bitcoin, as it would drive its value down. On the capital side, miners buy specialized equipment called "rigs" to mine bitcoin:

Mitigating Risks 2 & 3: The bitcoin reward provides the incentive for miners to create a header hash that has the necessary elements. Meanwhile, the 21 million cap on bitcoins actually makes the currency deflationary: as bitcoins get deleted or become inaccessible because someone can't remember the password to their digital wallet, those bitcoins are gone forever. Consequently, the total amount of bitcoin in circulation will be less than 21 million.
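The 21 million figure is not a stored constant; it falls straight out of the reward schedule, which can be checked in a few lines of Python:

```python
# The cap emerges from the reward schedule: 50 BTC per block at launch,
# halving every 210,000 blocks, with rewards counted in whole satoshis.
subsidy = 50 * 100_000_000  # initial block reward in satoshis (10^8 per bitcoin)
total = 0
while subsidy > 0:
    total += subsidy * 210_000  # every block in this halving era pays `subsidy`
    subsidy //= 2               # integer halving; eventually rounds down to zero
print(total / 100_000_000)      # just under 21 million BTC
```

The geometric series 50 + 25 + 12.5 + ... sums toward 100 BTC per block-slot, and 100 x 210,000 = 21 million; integer rounding in the satoshi arithmetic is why the true total lands slightly below the cap.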

Step 12: The block is time stamped 

Process: Timestamps are embedded in every calculation involved in generating the block. This makes the blockchain “immutable”, as malicious actors can’t change previous blocks, especially after 6 blocks have been added on top (which is why online retailers wait 60 minutes before accepting payment).

Risk & Control: As noted in Step 9, the blockchain concept of linking one block to another in a sequence is one of the key controls ensuring that transactions are not modified in an unauthorized manner. Such a control depends on the timestamp, as noted in the process section.

Step 13: Block is propagated across the network.

Process: Other nodes check the hash by running it through the SHA-256 hash function and confirm that the miner has properly checked the transactions. If more than 51% agree, the block is accepted as valid, added to the shared blockchain ledger, and becomes part of the immutable record.

Risk: If miners could add blocks to the ledger unilaterally, they would have both access to gaining the asset (i.e. the bitcoin) and access to the ledger itself.

Control: The bitcoin network effectively segregates incompatible functions by requiring 51% of the network to agree that the work performed was valid. That is, a block cannot become part of the blockchain ledger until the majority of the network reviews the work performed by the miners. 

Hopefully, this has clarified some of the nagging questions you've had about how the bitcoin blockchain enables trust through a decentralized peer-to-peer network. That being said, the above flowchart has been quite the labour of love for the past few months, so there will likely be a few gaps! Special thanks to Andreas Antonopoulos who, although I have never met him, has made this journey a lot easier by making his work available online.

Please email me at  malik [at] auvenir.com if you have any comments, questions, or notice any gaps.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the way we do financial audits. The opinions expressed here do not necessarily represent those of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Wednesday, May 17, 2017

Will auditors go the way of horses?

In late 2015, MIT professors Erik Brynjolfsson and Andrew McAfee penned an article entitled "Will humans go the way of horse labour?"

The article explores how the mechanization of farm labour serves as a model for exploring the automation of knowledge work, citing the work of Nobel Prize-winning economist Wassily Leontief. They state:

"In 1983, the Nobel Prize-winning economist Wassily Leontief brought the debate into sharp relief through a clever comparison of humans and horses. For many decades, horse labor appeared impervious to technological change. Even as the telegraph supplanted the Pony Express and railroads replaced the stagecoach and the Conestoga wagon, the U.S. equine population grew seemingly without end, increasing sixfold between 1840 and 1900 to more than 21 million horses and mules. The animals were vital not only on farms but also in the country’s rapidly growing urban centers.

But then, with the introduction and spread of the internal combustion engine, the trend rapidly reversed. As engines found their way into automobiles in the city and tractors in the countryside, horses became largely irrelevant. By 1960, the U.S. counted just 3 million horses, a decline of nearly 88 percent in just over half a century. If there had been a debate in the early 1900s about the fate of the horse in the face of new industrial technologies, someone might have formulated a “lump of equine labor fallacy,” based on the animal’s resilience up till then. But the fallacy itself would soon be proved false: Once the right technology came along, most horses were doomed as labor."

The MIT professors are not alone in sounding the alarm when it comes to how automation can impact labour. Others include Thomas Piketty, Douglas Rushkoff, Martin Ford, and Nick Carr.

If the techno-dystopians are right, then there will need to be a fundamental alteration of the way the economic system is structured to address the unemployed masses. Such masses are not likely to take such things lying down. For example, in response to the Great Depression, there were mass demonstrations in Washington, DC, where thousands protested their plight. In January 1932, Cox's Army of 25,000 assembled in the capital to protest their poverty. Later that year, the Bonus Army of 43,000 marched on Washington in the summer to demand the US government pay the promised bonus early:

Alternatively, if the techno-utopians, such as Peter Diamandis and others at Singularity University, are right, then such protests won't be necessary: the system will make changes proactively to ensure that the gains made from exponential technologies are made available to the majority.

The point is that, either way, action must occur at the political level to make the changes necessary to address the deeply embedded economic architecture.

Consequently, working within the status quo leads to one actionable option: "Race with the Machine".

Prior to penning the article I cited above, MIT professors Erik Brynjolfsson and Andrew McAfee proposed that the path forward requires "man and machine" to work together:

This is essentially how IBM's cognitive system, Watson, was positioned when it comes to doctors and medicine: doctors delegate the task of treatment research to Watson, while they determine the right treatment for their cancer patients. For example, doctors and Watson were able to work together to determine the correct treatment for a 60-year-old Japanese patient.

How can this be applied to financial audit? 

Firstly, the scope of the audit is driven by optimizing the cost-benefit curve. Consequently, there is potential to get greater assurance for the same amount of resources. Keep in mind that if auditors had to audit all transactions, the organization could go bankrupt just trying to pay the audit bill. As a result, auditors only look at transactions on a test basis.

However, with the increased datafication of an organization's interactions with stakeholders, there is an opportunity - that didn't previously exist - to analyze these interactions for audit insights.

Take for example a Business to Consumer (B2C) company, like Dell, that interacts with its customers via social media. In 2005, there was an infamous spat between a CUNY journalism professor, Jeff Jarvis, and Dell computers (original post here). Jarvis was irate over the customer service and has been an Apple customer since. Such conversations can be mined for potential audit implications. In this particular instance, it could be a means to assess the adequacy of the sales returns allowance: developing a model based on how many other customers have complained via blogs, Twitter, or other social media about the B2C company, and then assessing whether the provision is adequate.

Previously, such an analysis would have been cost prohibitive, and it wouldn't have made sense for the auditor to even consider such a thing. For example, the B2C company would need to record all conversations and then have the auditor listen to thousands of hours of them to see whether such an issue actually exists.

This is not to say that it is currently feasible to run such an analysis. Tools that aggregate, standardize, and analyze such unstructured text could be argued to be in their infancy. However, datafication, combined with further advances in social analytics tools (see video below for an example), is the first step toward a world where such analysis could be feasible.
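As a purely hypothetical sketch of what such an analysis might look like, here is a toy Python routine that flags complaint language in a sample of posts and compares the complaint rate to an assumed provision rate. The keyword list, sample posts, and threshold are all invented for illustration; real text analytics would be far more sophisticated:

```python
# Hypothetical sketch: score a sample of customer posts for complaint
# language and compare the complaint rate against the rate implied by the
# company's sales-returns provision. All names and numbers are invented.
COMPLAINT_TERMS = {"refund", "return", "broken", "defective", "hell"}

def complaint_rate(posts):
    """Fraction of posts containing at least one complaint keyword."""
    flagged = sum(1 for p in posts
                  if any(term in p.lower() for term in COMPLAINT_TERMS))
    return flagged / len(posts)

posts = [
    "Dell hell: my laptop arrived broken and support won't help",
    "Love the new laptop, fast shipping",
    "Requesting a refund, the screen is defective",
    "Great value for the price",
]
rate = complaint_rate(posts)
provision_rate = 0.02  # returns provision as a fraction of sales (assumed)
print(rate, rate > provision_rate)  # a gap this large would warrant follow-up
```

The point is not the (crude) keyword matching but the audit logic: a complaint rate far above the rate implied by the provision is a signal worth investigating, not audit evidence in itself.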

The second separate but related issue is the role of the regulators in opening or closing the gate on innovation.

Some may mistakenly believe that this is due to the regulated nature of audit. However, audit is not the only arena where innovation is shaped by a “regulator”. In fact, the success or failure of innovation depends on how the incumbents who govern the landscape make way for the new technology (or not).

Take for example the rise of the iPhone in the corporate environment. What allowed consumerization to take place (i.e. allowing users to connect their favourite smartphone devices to the network instead of the corporate devices) was that Microsoft took an open approach to licensing its Exchange ActiveSync protocol. They could have created a walled garden that allowed only Windows Phone to connect to their email server; instead, they paved the way for iPhone and Android devices to connect to the corporate email server. Microsoft, as the "regulator" of which mobile devices could connect to its mail server, enabled the iPhone and Android to displace our beloved BlackBerrys from the corporate environment. Had Microsoft seen more profit in walling off the market for its own devices, the ability of Apple's iDevices to disrupt corporate IT would have been stifled if not suffocated.

On the opposite side, David Sarnoff of RCA squashed FM radio in order to protect his AM radio technology and pave the way for television. The inventor, Edwin Armstrong, who was initially Sarnoff's friend, had mistakenly shared his technological innovations with him, only to be betrayed. FM radio technology had the potential to carry data, such as faxes, back in the 1930s. One can only imagine the state of wireless technology today had RCA allowed it to flourish.

Similarly, in 1934, AT&T blocked the answering machine for fear that it would undermine their business: the "ability to record voice would cause business people to shun the telephone for fear of having their conversations recorded". So although much innovation came out of AT&T's Bell Labs, the point is that it was effectively acting as the "regulator" that determined which innovations were permitted in the telecommunications industry and which ones were not.

Consequently, the regulators (e.g. SEC, PCAOB, AICPA, etc.) will have a significant role to play in how innovation unfolds within the arena of audit. It is ultimately they who are going to weigh and assess what actually constitutes reasonable assurance.

Where are the regulators currently at? 

Well, it seems that they are looking to technology to actually improve audit quality. In a May 2017 speech, PCAOB Board Member Jeanette M. Franzel noted in the section "Impact of Technology on Audit" that:

"If managed and implemented properly, these developments have the potential to enhance the value of the audit process and increase audit quality." [emphasis added]

To be sure, it's not all rainbows and unicorns. Board Member Franzel did note that "potentially disruptive changes will present challenges and threats across the auditing profession". However, at least there is an appetite to explore how such technologies can improve audit quality, expand what more can be done within audits, and enable auditors to race with the machine.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the way we do financial audits. The opinions expressed here do not necessarily represent those of UWCISA, UW, Auvenir, Deloitte or anyone else.

Monday, April 10, 2017

[Update] Do 2 non-CPA audits equal 1 CPA audit? Zcash gets non-audit firms to issue audit reports.

Last year, Zcash went live.

What is Zcash?

Zcash is a public blockchain similar to bitcoin. Zooko Wilcox, the founder of Zcash, explains what it is in the following video:

As he notes in the video, what distinguishes Zcash from bitcoin is that it offers users greater privacy: on bitcoin's public ledger, transaction details such as addresses and amounts are visible to everyone. Because Zcash uses zero-knowledge proofs (see the amazingly easy-to-follow explanation below), those transaction details do not need to be revealed, thereby offering extra anonymity to the user.

However, what I thought was exceptionally noteworthy about Zcash is how it went about proving to the world that its code is sound. When Zcash went live, CoinDesk reported the following:

"Notably, the development team released two audits conducted by NCC Group and Coinspect, respectively, ahead of the launch.

The reports sought to identify potentially harmful bugs in the cryptocurrency's code prior to launch. (The audits can be found here and here)."
The article referenced a blog post, which described the scope of the security audits as follows:

"Today we are publishing the final reports of each external security auditor we contracted this summer to review our code. We've triaged the issues found and addressed any we considered severe (e.g. could compromise user privacy, lose funds, break consensus, etc...).

NCC Group's conclusion was (also available here):

“NCC Group performed a two-part targeted review of the Zcash cryptocurrency implementation. The first part, performed by the Group's Cryptography Services practice, focused on validating that Zcash's implementation adhered to the Zcash Protocol Specification. An assessment looking for security errors within the cryptographic implementation was also performed. The second part was a C++ source code review for vulnerabilities using static and dynamic analysis and fuzz testing. The review also included a cursory assessment of dependent libraries and recommendations for improving software assurance practices at Zcash.

NCC Group identified an issue that would allow an adversary to tamper with the verification and proving keys used by the Zcash daemon as well as a number of C++ coding errors that could result in stack-based buffer overflows, data races, memory use-after-free issues, memory leaks, and other potentially exploitable runtime error conditions. Additionally, most, if not all, third-party open source library dependencies were identified as being out-of-date. In the end, NCC Group did not find any critical severity issues that would undermine the integrity of the Zcash blockchain or undermine the security of confidential transactions during the time that the review was conducted (from August 8 – September 2, 2016).”

As for Coinspect, they noted (also available here):

"Coinspect reviewed Zcash's innovations over the Bitcoin Core source code, focused on evaluating its resistance against specific threats to cryptocurrencies. Coinspect identified high-risk and moderate-risk issues during the assessment that affected the performance and availability of the Zcash p2p network. The security issues identified did not allow remote code execution nor allowed an attacker to steal funds or compromise the privacy of Zcash users. However we found exploitable 51% and isolation attacks with minimum resources.

It is an honor for Coinspect to contribute with our cryptocurrency security experience to the exceptional team behind this exciting project."

What I thought was interesting was a couple of things.

Firstly, these are purely tech experts, not CPAs. Yet they are producing "audit reports" that users will rely on for privacy, for the protocol's ability to generate consensus, and for protection against loss of funds.

Of course, a CPA firm couldn't opine on such things because the liability would be too much for the firm to bear.

But I think that's the point: if things are so complex and risky that a CPA firm can't produce the audit report, it leaves the field wide open for competitors like Coinspect and NCC Group (who were likely paid $250,000).

And here's the twist: they retained two separate firms to do this. I think that's the really interesting part.

Audits completed by CPAs are governed by strict independence standards. However, what Zcash is in effect saying is that such issues can be overcome by getting two "unlicensed" auditors to opine on the same thing. Implicitly: why would two independent parties collude on a lie?

Initially, Zcash as a cryptocurrency was not doing so well price-wise. When this post was originally written (on Dec 23rd), there were 188,905 transactions executed on this blockchain. Today, roughly 3 months later on April 10th, the transaction count has more than doubled to 463,560. Furthermore, it is now the 9th most popular cryptocurrency by market capitalization.

The world of cryptocurrency is not as conservative as the world of financial statements. However, the approach Zcash took essentially worked to gain trust. Although we can have philosophical debates on whether this meets GAAS or not, the reality is that someone has found a way to eat our lunch.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the way we do financial audits.

Saturday, April 1, 2017

Cafe X and Amazon Go: Auditing a robot-operated store?

By now you've probably heard of the robot-barista, Cafe X. If not, check out this video from Wired, where David Pierce walks us not only through how the robot will make your latte, but why he thinks it's better than the human alternative:

Amazing isn't it?

In a presentation I did last year on how these forces of automation could impact auditing and accounting, I noted that it's easier to see how technology disrupts someone other than you.

And so it looks like baristas have met their match.

As Pierce notes in the video, the inconvenience of dealing with imperfect people is something most of us want to avoid in the rat race we live in: who wants the barista to remake your coffee 11 times, as he says? ;)

The Wired article also notes that Cafe X offers high quality at a cheaper price:

"Surprisingly delicious coffee, starting at $2.25—cheaper than you’d find at Sightglass or even Starbucks. Cafe X’s location in the corner of the Metreon may not entice you out of your daily routine."

Amazon Go: Walkthrough Technology 
Amazon has also wowed the "techthusiasts" out there with their cashier-less store concept:

In the FAQ section, Amazon summarizes how this cashier-less store works:

"Our checkout-free shopping experience is made possible by the same types of technologies used in self-driving cars: computer vision, sensor fusion, and deep learning. Our Just Walk Out Technology automatically detects when products are taken from or returned to the shelves and keeps track of them in a virtual cart. When you’re done shopping, you can just leave the store. Shortly after, we’ll charge your Amazon account and send you a receipt."
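
The "virtual cart" Amazon describes can be sketched as a simple event processor: sensor events add or remove items from a per-shopper cart, and exiting the store triggers the charge. This is my own toy reconstruction of the idea; the event names, catalogue and prices are assumptions, not Amazon's actual design.

```python
# Toy sketch of a "Just Walk Out" virtual cart: take/return events from the
# store's sensors update the cart, and leaving the store totals it up.
# The catalogue, prices and event format are hypothetical.
from collections import Counter

PRICES = {"sandwich": 5.99, "soda": 1.49}  # hypothetical catalogue

class VirtualCart:
    def __init__(self):
        self.items = Counter()

    def handle_event(self, event: str, sku: str):
        """'take' adds an item; 'return' removes one, never going negative."""
        if event == "take":
            self.items[sku] += 1
        elif event == "return" and self.items[sku] > 0:
            self.items[sku] -= 1

    def checkout(self) -> float:
        """On exit, total the cart; the real store would charge the account."""
        return round(sum(PRICES[sku] * n for sku, n in self.items.items()), 2)

cart = VirtualCart()
for ev, sku in [("take", "sandwich"), ("take", "soda"), ("return", "soda")]:
    cart.handle_event(ev, sku)
# cart.checkout() now reflects only the sandwich that left the store.
```

The hard part, of course, is not this bookkeeping but the computer vision and sensor fusion that reliably produce the take/return events in the first place.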

Although this has the potential to revolutionize retail, Amazon has experienced some setbacks of late. The store can allegedly only handle 20 people at a time, so there may be some kinks to work out before this goes mainstream.

Obviously, this could have a massive impact on entry-level jobs: most of us who were young a while ago relied on these McJobs for spending money and for funding our college/university tuition. They also gave students some practical work experience to help land a career in the accounting profession ;)

But let's save this discussion for a future post.

How would you audit cashier-less stores, like Cafe X or Amazon Go?

The retail industry has been a labour-intensive industry that requires cashiers, stock room personnel and the like. Such a process naturally requires policies and procedures (aka internal controls) to ensure that merchandise makes it from the shelf to the cash register and into the customer's possession. And there are those anti-theft mechanisms to prevent shoplifting as well. In the industry, "shrinkage" (the amount of merchandise that is stolen, damaged, etc.) is estimated by the National Retail Federation to be 1.38% of sales, or $45.2 billion for 2015.

Cafe X and Amazon Go offer a glimpse into how automating traditional businesses can alter these fundamental risks that impact the way we go about conducting our financial audits.

With Cafe X, shrinkage is almost eliminated, as no humans are involved in the production process. Once the kiosk is loaded up with cups, coffee, syrup, sugar, milk, etc., the system is essentially fully automated: no manual intervention by baristas or customers.

Amazon Go, on the other hand, uses a whole lot of automation to watch and analyze every move customers (and employees) make throughout the store. Consequently, this would not be the store to steal from! And let's not forget Amazon is experimenting with those drones; are we really sure that they are unarmed?

Given this level of automation of the actual business process and controls, could auditors stick to the tried, tested and true retail audit procedures? Or would this enable a more automated approach?

I was directly involved with the recent test-audit of a blockchain involving loyalty points. One of the realities of auditing such exponential technologies is that it makes controls testing a must. For example, for the financial auditor to rely on the digital signatures, there needs to be some testing around the wallets to ensure that the signatures are reliable.

Consequently, testing such automated stores would require either a SOC 2 or a modified SOC report to meet the needs of such a store. For example, the SOC 2 would need some way of gaining comfort over how stock and inventory get loaded into the store. Likely the auditor would rely on the automated process the store uses to replenish stock, but it's the hand-off with the delivery person (assuming it's still a human) where there is a risk of shrinkage. For example, how does legitimately damaged inventory get accounted for at that point? Whatever processes and controls Amazon or Cafe X put in place would need to be tested from a controls perspective.

For the substantive component, I think that's where things get interesting: enter the "embedded audit module" (EAM). This concept has been around since at least 1989. The idea is that the auditor installs independent software onto the client's system; that software captures transaction data and transmits it back to the auditor, who uses it as a basis for conducting the necessary audit procedures and tests. The core idea is that the auditor has full control over such a system and the client cannot tamper with the code.
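
A minimal sketch of the tamper-resistance idea: the module signs each captured record with a key only the auditor holds, so any alteration by the client is detectable. The record fields, key handling and names here are illustrative assumptions; a production EAM would involve far more (secure key storage, secure transmission, completeness checks).

```python
# Illustrative embedded audit module sketch: each captured transaction is
# serialized and signed with an HMAC keyed by the auditor's secret, so the
# client cannot alter records without detection. All names are hypothetical.
import hashlib, hmac, json

AUDITOR_KEY = b"auditor-secret-key"  # in practice, held only by the auditor

def capture(record: dict) -> dict:
    """Serialize the transaction and attach a keyed signature."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(AUDITOR_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify(captured: dict) -> bool:
    """Back on the auditor's system, confirm the record was not altered."""
    expected = hmac.new(AUDITOR_KEY, captured["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, captured["tag"])

sale = {"sku": "latte", "qty": 1, "price": 2.25, "ts": "2017-04-01T09:00:00"}
rec = capture(sale)
assert verify(rec)                    # untouched record verifies
rec["payload"] = rec["payload"].replace("2.25", "0.25")
assert not verify(rec)                # tampering breaks the signature
```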

What would be relatively straightforward is the data-capture component: sales data, stock data, spoilage, etc. would be uploaded from the automated store right into the auditor's system. But this then requires the additional step of verifying the data against independent source documents (e.g. invoices, purchase orders, etc.). In other words, the audit procedure would still require manual intervention, as the auditee would need to send this information back to the auditor to complete the audit.

Where I think the audit innovation lies is in exploring how video footage can act as a substitute for physical/direct observation by the auditor. That is, could the auditor install a video camera in the automated store as part of the EAM, with the footage acting as independent audit evidence of the actual sales and purchases? In the Cafe X example, the auditor could use the footage and vision software to count the cups sold that day and reconcile that count to the sales data transmitted back by the EAM.
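
The reconciliation step itself is simple once both counts exist. Here is a sketch with made-up figures; the vision-derived counts are assumed to come from some separate video-analysis pipeline, which is the genuinely hard part.

```python
# Hedged sketch of the reconciliation: compare the daily cup count derived
# from video analysis against the sales count transmitted by the EAM, and
# flag days where the two disagree. All figures are illustrative.

def reconcile(video_counts: dict, eam_counts: dict, tolerance: int = 0) -> dict:
    """Return {day: difference} for days where the sources disagree."""
    exceptions = {}
    for day in sorted(set(video_counts) | set(eam_counts)):
        diff = video_counts.get(day, 0) - eam_counts.get(day, 0)
        if abs(diff) > tolerance:
            exceptions[day] = diff
    return exceptions

video = {"2017-04-01": 212, "2017-04-02": 198}  # cups counted from footage
eam   = {"2017-04-01": 212, "2017-04-02": 195}  # sales reported by the EAM
# reconcile(video, eam) flags 2017-04-02: three more cups were seen than
# sold, a candidate for follow-up (spoilage? remakes? unrecorded sales?).
```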

One can argue that such transactions are not material and that such procedures are therefore overkill.

However, I think now is the right time to conduct experiments and test audits to see whether we can reinvent the classic audit to meet the technology of today. In a future post, we will explore what this means broadly for jobs and more specifically how this could impact the profession.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, working to transform the way we do financial audits.

Thursday, January 12, 2017

Change Management: Norway's switch to Digital Radio

Norway is making the switch: moving from FM Radio to digital audio broadcasting (DAB).

As reported in The Local, an English-language Norwegian news site, Ole Jørgen Torvmark, the head of Digitalradio Norge (jointly owned by the private and public radio stations), said:

"The big difference and the main reason behind this big technological shift is that we want to offer a better radio service to the whole population."

The article also notes that FM can only support 5 national stations, whereas DAB can support 22 national stations and 20 smaller ones. Furthermore, they make the case it is:
  • Cheaper: Will cost an eighth of FM.
  • Better: Better coverage, ability to catch up on programs.
  • Faster: Easier to get Emergency messages out.
However, not all are happy. According to the WSJ, two-thirds of Norwegians are actually against the move. The Local noted that people are not pleased about paying extra for a new radio to receive the signals, despite the advertised benefits.

For those interested in the technology behind AM and FM radio check out this:

But for more on the challenges of abandoning this decades old technology check the following BBC report:

As any technology professional knows, one of the most difficult aspects of making change is the people side of the technology. For example, Norwegians would be collectively better off if they switched to DAB, as the overall cost of operating radio would be much lower.

But is that good enough for people to pay the costs for getting a new radio?

It's important to recognize that people need more than cold facts to be positive towards change. Organizations that need to make such changes, technological or otherwise, also need to address people's emotional side: the fear, uncertainty and doubt that come along with such change.

Monday, January 9, 2017

SEC and Whistleblowers: Can robots come to the rescue?

Saw this news alert from AccountingToday:

"The Securities and Exchange Commission announced that it had awarded more than $5.5 million to a whistleblower. According to the SEC, the whistleblower directly reported critical information to the commission about an ongoing scheme at their workplace, and that led to a successful enforcement action..."

The article also gives some useful stats on the number of whistle-blowers coming out and the total number of payouts, so check it out.

This is good news in terms of promoting the idea of speaking truth to power. Without such assistance, it can be quite difficult to encourage whistleblowing.

We often have a romantic notion of what it is like to tell the truth when everyone around us is driven to commit fraud. Too many Hollywood blockbusters make us believe, falsely, that there is always a happy ending where the good guys win.

For a reality check, we should take a look at Alayne Fleischmann's ordeal in attempting to blow the whistle on the mortgage fraud at Jamie Dimon's JP Morgan Chase. As Rolling Stone's Matt Taibbi notes:

"Fleischmann...had to struggle to find work despite some striking skills and qualifications, a common symptom of a not-so-common condition called being a whistle-blower...Thanks to a confidentiality agreement, she's kept her mouth shut since then. "My closest family and friends don't know what I've been living with," she says. "Even my brother will only find out for the first time when he sees this interview."

As she notes in the video below, the reality of such environments is that there is subordination of the "compliance" functions to enable the fraud to go through (e.g. the due diligence manager got angry when people thought the loans were bad), a lack of effective segregation of duties (e.g. salespeople were involved in the due diligence review), and other issues:

Can robots come to the rescue?

When looking at process automation more broadly, we see that one of the "side benefits" is compliance. For example, when a library loans out e-books, they are never returned late: the patron's access to the digital copy on the reading device is removed right on the due date. Similarly, autonomous vehicles never speed, never fail to come to a full stop, and the like.
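
The e-book example is compliance-by-design in its purest form, and it fits in a few lines. This is a toy sketch with hypothetical names, not any real lending system's API.

```python
# Toy sketch of compliance-by-design in e-book lending: the reader checks
# the due date on every open, so a "late return" simply cannot occur.
# Class and method names are hypothetical.
from datetime import date

class EbookLoan:
    def __init__(self, title: str, due: date):
        self.title, self.due = title, due

    def can_open(self, today: date) -> bool:
        """Access is revoked automatically once the due date has passed."""
        return today <= self.due

loan = EbookLoan("Hard Times", due=date(2017, 1, 31))
assert loan.can_open(date(2017, 1, 30))      # before due date: readable
assert not loan.can_open(date(2017, 2, 1))   # after due date: auto-"returned"
```

There is no late fee, no reminder email, no enforcement step: the rule and the mechanism are the same thing.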

Insurance companies have attempted to use what we might call "compliance tech" by offering drivers a discount for good driving if they are willing to install a monitoring device in their car. As noted in the CBC article, Desjardins Insurance says 7,000 people have signed up for this offer, which it calls Ajusto. As can be seen in the video, Ajusto also leverages gamification and social features to promote the program.

Although they have promised that such technology won't be used to penalize drivers, many skeptics are not sure it will turn out that way. For example, Leonard Kunka, a motor vehicle litigation lawyer, notes:

"It's an invasive technology. It provides a lot more information than insurers currently have to set premiums, and I question whether it's any better than what the insurers use today to set premiums, which is a person's driving record and their history of collisions and accidents."

In other words, can we expect the insurance companies to maintain rates when they can "see" the driver constantly breaking speed limits? Conversely, can we expect them to lower rates when they see that people can drive safely above the speed limits?

Although I doubt it, the reality of such compliance tech is that it is only used by people who are already compliant: those who are not compliant would not sign up for such technology, and even if they did, they would somehow subvert it - as we saw with the whole Volkswagen emissions debacle:

"In the test mode, the cars are fully compliant with all federal emissions levels. But when driving normally, the computer switches to a separate mode—significantly changing the fuel pressure, injection timing, exhaust-gas recirculation, and, in models with AdBlue, the amount of urea fluid sprayed into the exhaust. While this mode likely delivers higher mileage and power, it also permits heavier nitrogen-oxide emissions (NOx)—a smog-forming pollutant linked to lung cancer—up to 40 times higher than the federal limit. That doesn’t mean every TDI is pumping 40 times as much NOx as it should. Some cars may emit just a few times over the limit, depending on driving style and load."
Ultimately, technology is only as good as the people that support it, so we can't abdicate such responsibility to technology. Instead, we need to continue to encourage people, morally and financially, to speak the truth when they see things go awry.