Wednesday, March 28, 2018

Audit, Audit, Audit harked Mark: Can CPAs come to Facebook's rescue?

In an investigation by the Guardian and the New York Times, the alleged misdeeds of Cambridge Analytica were revealed.

As noted in the Guardian article:

"Christopher Wylie, who worked with a Cambridge University academic to obtain the data, told the Observer: “We exploited Facebook to harvest millions of people’s profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”... Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals."

The following video from TheVerge sums up the issue:



Although such allegations have received attention (in my opinion, largely due to the association with Trump's campaign), the reality is that these allegations against Facebook are not new: they were reported by the Intercept in early 2017 and by the Guardian as far back as 2015.

There was an ensuing backlash (as noted in the video above and here) that forced Facebook CEO Mark Zuckerberg to respond. He issued a written response and gave the following interview on CNN:



During the CNN interview, he mentioned the word "audit" three times [emphasis added]:
  • "So we're going to go now and investigate every app that has access to a large amount of information from before we locked down our platform. And if we detect any suspicious activity, we're going to do a full forensic audit"
  • "And we're now not just going to take people's word for it when they give us a legal certification, but if we see anything suspicious, which I think there probably were signs in this case that we could have looked into, we're going to do a full forensic audit."
  • "We know how much -- how many people were using those services, and we can look at the patterns of their data requests. And based on that, we think we'll have a pretty clear sense of whether anyone was doing anything abnormal, and we'll be able to do a full audit of anyone who is questionable."
Can CPAs come to Mark's rescue? 
Zuckerberg's repeated use of the word "audit" should be read in conjunction with his "welcoming" of regulation:

"I actually am not sure we shouldn't be regulated. You know, I think in general, technology is an increasingly important trend in the world, and I actually think the question is more what is the right regulation rather than yes or no, should it be regulated?"

Zuckerberg would not be the first tech leader to opt for regulation as a business strategy.

In Tim Wu's The Master Switch, Theodore Vail similarly advocated for the concept of a regulated monopoly in the arena of telephony:

"[Theodore] Vail died in 1920 at age 74, shortly after resigning as AT&T's president, but by that time, his life's work was done. The Bell system had uncontested domination of American telephony, and long-distance communication was unified according to his vision. The idea of an open, competitive system had lost out to AT&T's conception of an enlightened, licensed, and regulated monopoly. AT&T would remain in this form until the 1980s, and it would return in not so substantially different form in the 2000s. As historian Milton Mueller writes, Vail had completed the "political and ideological victory of the regulated monopoly paradigm, advanced under the banner of universal service."" [emphasis added]

As Wu points out in his book, AT&T didn't always use its monopoly power for good. It charged high long-distance rates and even stifled innovation, suppressing the answering machine because of its potential conflict with the company's main business.

Regardless, the precedent shows how Facebook could embrace oversight by becoming an early advocate for CPAs offering privacy-related assurance services around its algorithms.

AlgoTrust: A new service offering for CPAs? 
The concept of AlgoTrust is something I have previously discussed in this post.

The idea actually has support from multiple angles, not least of which comes from information security expert Bruce Schneier:

"...it is also worth noting that there are other experts who hold that algorithms - from a privacy perspective - need to be regulated. Bruce Schneier, a well-known information security expert who helped review the Snowden documents, in his latest book, Data and Goliath ... also calls for "auditing algorithms for fairness". He also notes that such audits don't need to make the algorithms public, which is it the same way financial statements of public companies are audited today. This keeps a balance between confidentiality and public confidence in the company's use of our data."

Big Data versus Privacy: The monetization paradox
Such an algo-audit could leverage the work done by the AICPA and CPA Canada in the realm of privacy, specifically the Generally Accepted Privacy Principles. That being said, privacy audits have been a hard sell in the past. What distinguishes the service here is that it would be auditing the algorithm for compliance with privacy "regulations". The reason "regulations" is in quotes is that, in substance, privacy legislation is effectively eliminated if the consumer consents to use the service.

The challenge, therefore, is balancing the drive to monetize big data with the privacy needs of the people who use the service. For example, people who identify with the "left" may not want Steve Bannon or Trump accessing their data. Similarly, people who identify with the "right" may not want Obama accessing their social media data. The end result is that no one can access meaningful data due to privacy restrictions - a standard so restrictive that it eliminates the ability of companies like Facebook to monetize the treasure trove of data they have collected.

As noted in an earlier post, there is an inherent conflict between privacy and profiting from big data. The value of big data emerges from its secondary uses. However, privacy policies require the user to consent to a specific use of data at the time they sign up for the service, which means future big data analytics are essentially limited by the uses the user agreed to at sign-up. Corporations, in their drive to maximize profits, will therefore make privacy policies so loose (i.e. to cover secondary uses) that the user essentially has to give up all their privacy in order to use the service.

There is a lot of potential in attempting to create an assurance service to address Facebook's predicament, but as they say, the devil is in the details. 

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Friday, November 3, 2017

Big Data Auditing Revisited: Context is King

It has been a few years since I wrote up on Big Data and the Audit. It was one of the more popular posts, with over 1,000 hits to date.

The post looks at Big Data: A Revolution That Will Transform How We Live, Work, and Think by Kenneth Cukier and Viktor Mayer-Schönberger. I enjoyed the book as it really broke down the business impact of big data without getting into the technical details of the underlying technology.

Why take a second look at big data auditing?

Big data and the accompanying analytical models are a key precursor to artificial intelligence. The machine learning algorithms that power AI bots require users to analyse the problem and train the underlying algorithm.

Part 1: Context is King

To make things a bit more digestible, I thought it would be good to divide the post into two parts. This first part is the more palatable one, as I want to explore the second use case from the book in a bit more detail, along with its relevance today. The second part will be a bit more controversial, as I will look at the difficulty of applying fraud-fighting or cancer-fighting algorithms in the realm of the (external) financial audit.

But let's look at the first issue: how can big data analytics give us better context? 

In the original post, I discussed the use case from Cukier and Mayer-Schönberger's work around Inrix. The book gives the example of an investment firm using traffic analysis from Inrix to estimate the sales a retailer will make, and then buying or selling the retailer's stock on that information. In a sense, the investment firm is using vehicular traffic as a proxy for sales. In an audit context, auditors can develop expectations of what sales should be based on the number of vehicles going to the stores. For example, if sales are going up but the number of vehicles is going down, then the auditor would need to take a closer look.

What I realized from this example is that big data can give auditors better context and a better basis for assessing reasonability. That is, as more sensor data and other data become available for auditors to integrate into statistical models, the more they will be able to spot anomalies.
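To make that concrete, here is a minimal sketch of such an analytical procedure. The weekly figures and the threshold are made up, and a single-variable linear model is deliberately simplistic; it only illustrates the mechanics of turning vehicle counts into an expectation for recorded sales:

```python
import numpy as np

# Hypothetical data: weekly vehicle counts near the client's stores
# (e.g. from a traffic provider such as Inrix) and recorded sales.
# The first five weeks form the baseline used to build the expectation.
baseline_traffic = np.array([12_000, 11_500, 13_200, 12_800, 13_500])
baseline_sales   = np.array([842_000, 815_000, 910_000, 890_000, 935_000])

# Develop the expectation: sales as a simple linear function of traffic.
slope, intercept = np.polyfit(baseline_traffic, baseline_sales, 1)

# Current period under audit: traffic fell sharply but recorded sales
# did not -- exactly the pattern the auditor should look into.
current_traffic, recorded_sales = 9_800, 905_000
expected_sales = slope * current_traffic + intercept

# Hypothetical threshold standing in for performance materiality.
threshold = 50_000
difference = recorded_sales - expected_sales
print(f"expected ~{expected_sales:,.0f}, recorded {recorded_sales:,.0f}")
if abs(difference) > threshold:
    print("Difference exceeds threshold -- investigate further.")
```

A real procedure would use far more data points and controls for seasonality, but the structure is the same: build an expectation from independent data, compare, and follow up on the gap.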

One of the issues with Barry Minkow's ZZZZ Best accounting fraud was the lack of context. For more on the fraud, check this video:

I actually studied this case in my auditing class at the University of Waterloo. One of the lessons we were to take away from this case was that the auditors didn't know how much a site restoration would cost on average (see the first bullet in this text on page 129). But how would an auditor be able to access such data? Even with the advent of the internet, it is not simply a matter of Googling for the information.

More recently, an accounting professor was found to have fabricated data. The way he got caught was that a statistic he used didn't correspond to reality. Specifically:

"misrepresented the number of U.S.-based offices it had: not 150, as the paper maintained (and as a reader had noticed might be on the high side, triggering an inquiry from the journal)" [Emphasis added]

Again, the reader had the context to recognize that what was presented was unreasonable, causing the study to unravel and exposing the academic fraud perpetrated by Hunton.

What will it take to make this a reality? 

What's missing is a data aggregation tool that can connect to the private, third-party, and public data feeds that an auditor can leverage for statistical analysis. Furthermore, for this to be useful to clients and the business community at large, the results need to be presented as visualized depictions that enable the auditor to tell the story better, rather than handing over complex spreadsheets.

Of course, presenting such materials requires auditors to have deeper training in data wrangling, statistics, and visualization tools and techniques.

In the next post, we will revisit the first use case that I presented in the original post, which explored how New York City was better able to audit illegal conversions through the use of big data analytical techniques. Originally, I had thought this would be a good model to apply in the world of audit. However, I am revisiting this idea.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Sunday, October 15, 2017

What's missing from this Top 5 uses of Blockchain list?

TechRepublic's Tom Merritt  walks us through the "Top 5" uses of blockchain in the following video. The accompanying post lists the following 5 use cases:
  • Stocks
  • Shipping
  • Diamonds 
  • Livestock 
  • Law

What's missing? 

The stocks use case is actually limited to Initial Coin Offerings (ICOs); for an overview of ICOs, check this article. However, the post excluded Linq, the blockchain that allows for the settlement of private securities.

On a broader note, though, the post excluded the financial industry altogether, even though it was a forerunner in the use of blockchain. Following the hype cycle, financial institutions were one of the early areas of interest for the permissioned blockchain. It seemed like every week another company was joining the R3 consortium.


However, since that initial fervor, a number of players, such as Goldman Sachs, Santander, Morgan Stanley and National Australia Bank, have left the consortium.

Why?

The problem lies in understanding the actual business case for the permissioned blockchain (for the differences between public and private/permissioned, see this post). The permissioned blockchain helps parties have a common view of the transactions they have entered into with each other via a shared ledger database. With the use of digital signatures, it incorporates authorization as well: in addition to sharing information, it enables the parties to "sign off" on that information.
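As a rough sketch of those two ingredients (a shared, tamper-evident record plus digital sign-off), here is a minimal hash-chained ledger in which each entry is signed by the submitting party and can be re-verified by every other participant. It assumes the third-party cryptography package, and it is only an illustration of the concept, not how any particular platform such as R3's Corda actually works:

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


class SharedLedger:
    """Append-only record shared by all parties: each entry is hash-chained
    to the previous one and signed by the party that submitted it."""

    def __init__(self):
        self.entries = []

    def append(self, party, private_key, payload):
        prev_hash = (hashlib.sha256(self.entries[-1]["body"]).hexdigest()
                     if self.entries else "0" * 64)
        body = json.dumps({"party": party, "payload": payload,
                           "prev": prev_hash}, sort_keys=True).encode()
        # The digital signature is the party's "sign-off" on the entry.
        self.entries.append({"body": body, "signature": private_key.sign(body)})

    def verify(self, public_keys):
        """Any participant can re-check the chain and every signature."""
        prev_hash = "0" * 64
        for entry in self.entries:
            record = json.loads(entry["body"])
            if record["prev"] != prev_hash:
                return False  # an earlier entry was altered or removed
            try:
                public_keys[record["party"]].verify(entry["signature"],
                                                    entry["body"])
            except InvalidSignature:
                return False  # entry not signed off by the claimed party
            prev_hash = hashlib.sha256(entry["body"]).hexdigest()
        return True


if __name__ == "__main__":
    bank_a_key = Ed25519PrivateKey.generate()
    bank_b_key = Ed25519PrivateKey.generate()
    public_keys = {"BankA": bank_a_key.public_key(),
                   "BankB": bank_b_key.public_key()}

    ledger = SharedLedger()
    ledger.append("BankA", bank_a_key, {"pay": "BankB", "amount": 1_000_000})
    ledger.append("BankB", bank_b_key, {"acknowledged": True})
    print(ledger.verify(public_keys))  # True: both banks see the same record
```

Note how little of this is novel: it is a shared database plus signatures, which is exactly why the business case question below matters.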

The banks could decide to use such a framework to make it easier to settle payments; however, how do they keep things like pricing and other data private? This still needs to be sorted out, but it points to a bigger question: what is the strategic advantage of blockchain for FIs? That is, this exponential technology doesn't lead to cost savings the way robotic process automation does, or to strategic insights the way big data analysis does.

And that's why I think shipping, or supply chain more broadly, is a much better beachhead for blockchain. With multiple partners involved in a supply chain, having a shared database lets the partners see where things stand between the wholesaler, shipper, and retailer, giving each party better insight into the movement of goods and other business information. Such a system would allow for creative ways to settle payments, or even enhance the ability of retailers to design consignment contracts with wholesalers. For example, Best Buy runs a marketplace of third-party sellers (e.g. Brainydeal is one such seller) within its retail storefront, which requires exactly this kind of coordination. The one caveat, however, is to ensure that (cheaper) existing technology doesn't already do this. After all, shared databases are not a novel concept.

I would contend that legal would be a great place for the blockchain to expedite paperwork - more so than supply chain. However, such technology would be fought tooth and nail by lawyers. And they have unlimited resources to fight such technology in the courts. Also, politicians have little incentive to look into such advances as most of them are lawyers, depend on lawyers or have friends who are.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Tuesday, October 3, 2017

Should drone inventors have thought about this risk?

Came across this article in the Wall Street Journal about how wedge-tailed eagles have turned out to be drones' worst nightmare. Here are some videos that illustrate the problem:



Being someone who works on innovation as a GRC Strategist, risk is something that I think about daily. Of course, you need to be prudent and make sure you've documented all the known risks and have a plan for how to mitigate them. For example, you should patch your software when the vendor tells you there is an issue.

But how could drone inventors possibly have worked the impact and likelihood of eagles tearing up their drones into the standard risk formula?
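For what it's worth, here is the classic risk-register arithmetic the question is poking at, with purely illustrative risks and 1-to-5 scores. The eagle never gets a row until after the first drone comes down:

```python
# Illustrative risk register: score = likelihood x impact (1-5 scales).
known_risks = {
    "unpatched firmware vulnerability": (4, 3),
    "battery failure mid-flight": (2, 4),
    "GPS signal loss": (3, 3),
}

for risk, (likelihood, impact) in sorted(
    known_risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    print(f"{risk}: score {likelihood * impact}")

# "wedge-tailed eagle attack" never makes the list: you can't score a
# risk you haven't imagined, which is the point of this post.
```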

It's a good illustration of how innovation requires taking risks that you will only encounter when actually deploying the innovation into the real world. There are just some things that will literally fall out of the sky that you didn't think of, and a workaround will need to be designed after the fact.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Monday, October 2, 2017

What can driving algorithms tell us about robo-auditors?

On a recent trip to the US, I decided to opt for a vehicle with sat-nav, as I was going to need directions and wanted to save on roaming charges. I normally rely on Google Maps to guide me around traffic jams but thought the sat-nav would be a good substitute.

Unfortunately, it took me on a wild goose chase more than once in the name of avoiding traffic. I blindly followed the algorithm's suggestions, assuming it would save me time, and ended up stuck at traffic lights waiting to make a left turn for what seemed like forever.

Then I realized what I was missing: the feature in Google Maps that tells you how much time you will save by taking the path less traveled. If it only saves me a few minutes, I normally stick to the highway, as there are no traffic lights and things may clear up. Effectively, Google gives you a way to supervise its algorithmic decision-making process.


How does this help with understanding the future of robot auditors?

Algorithms, and AI robots more broadly, need to give us sufficient data to judge whether the algorithm is driving in the right direction. Professional auditing standards currently require supervision of junior staff, and the analogy can be applied to AI-powered audit-bots. For example, let's say there is an AI auditor assessing the effectiveness of access controls and it is suggesting not to rely on the control. The supervisory data needs to give enough context to assess the consequences of taking such a decision versus the alternative. This could include the following (a sketch of what such a supervisory summary might look like follows the list):

  • Were controls relied on in previous years? This would give some context as to whether this recommendation is in-line with prior experience.
  • What are the results of other security controls? This would give an understanding whether this is actually an anomaly or part of the same pattern of an overall bad control environment.
  • How close is the call between the reliance and non-reliance decision? Perhaps this is more relevant in the opposite situation, where the system says to rely on controls even though it has found weaknesses. Either way, the auditor should understand how close the algorithm came to making the opposite judgment.
  • What is the impact on substantive test procedures? If access controls are not relied on, the impact on substantive procedures needs to be understood.
  • What alternative procedures can be relied on? In a scenario where the algorithm recommends not relying on a control, the auditor needs to understand what alternative procedures are available to obtain the required assurance.
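To make this concrete, here is a minimal sketch of the supervisory summary an audit-bot could hand back alongside its recommendation. Every field, value, and function name below is hypothetical, chosen only to mirror the bullets above:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ControlRecommendation:
    """Hypothetical supervisory summary an audit-bot returns alongside
    its rely / don't-rely recommendation on a control."""
    control: str
    rely: bool                      # the bot's recommendation
    confidence_margin: float        # how close the call was (0 = coin flip)
    prior_year_reliance: bool       # was the control relied on before?
    related_control_results: List[str] = field(default_factory=list)
    substantive_impact: str = ""    # effect on substantive procedures
    alternative_procedures: List[str] = field(default_factory=list)

    def needs_human_review(self, margin_threshold: float = 0.1) -> bool:
        """Escalate close calls or recommendations that contradict
        prior-year experience -- the 'supervision' in the analogy."""
        return (self.confidence_margin < margin_threshold
                or self.rely != self.prior_year_reliance)

rec = ControlRecommendation(
    control="Logical access to the revenue system",
    rely=False,
    confidence_margin=0.04,
    prior_year_reliance=True,
    related_control_results=["password policy: effective",
                             "user deprovisioning: exceptions noted"],
    substantive_impact="Extend detail testing of revenue transactions",
    alternative_procedures=["Review of manual access reviews"],
)
print(rec.needs_human_review())  # True: a close call that contradicts prior year
```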

What UI does the auditor need to run algorithmic audit?

On a broader note, what is the user interface (UI) to capture this judgment and enable such supervision?

Visualization (e.g. the vehicle moving on the map), mobile technology, satellite navigation, and other technologies are assembled to guide the driver. Similarly, auditors need a way to pull together not just the data necessary to answer the questions above, but also an understanding of which risks within the audit require greater attention. This will help the auditor understand where audit resources need to be allocated from a nature, extent, and timing perspective.

We all feel a sense of panic when reading the latest study predicting the pending robot apocalypse in the job market. The reality is that even driving algos need supervision and cannot wholly be trusted on their own. Consequently, when it comes to applying algorithms and AI to audits, it's going to take some serious effort to define the map that enables such automation, let alone build the automation itself.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Saturday, September 30, 2017

CPAOne: AI, Analytics and Beyond

Attended the CPA One Conference almost two weeks ago in Ottawa, Ontario. Given that my space is in audit innovation, I attended the more techno-oriented presentations. Here's a summary of the sessions that I attended:

"Big data: Realizing benefits in the age of machine learning and artificial intelligence": The session was kicked off by Oracle's Maria Pollieri. The session delved deep in the detail of machine learning and would have been beneficial to those who were trying to wrap things around thing more from a technical side. She was followed up by Roger's Jane Skoblo. She mentioned a fact that really grabbed my attention: when a business can just increase its accessibility to data by 10%; it can result in up to $65 million increase in benefits.

The next day started with Pete's and Neeraj's session on audit automation, "Why nobody loves the audit". They went over a survey of auditors and clients on the key pain points of the external audit. It turns out these challenges are shared by both sides. For example, clients lack context on why things are being collected, while auditors find it difficult to work with clients who lack that context. On the data side, clients have a hard time gathering documents and data, while auditors spend too much time chasing this information. From a solutions perspective, the presenters discussed how Auvenir puts a process around gathering the data and enables better communication. This will be explored in future posts when we look at process standardization as a key prerequisite to getting AI into the audit.

The keynote on this day was delivered by Deloitte Digital's Shawn Kanungo, "The 0 to 100 effect". The session was well-received as he discussed the different aspects of exponential change and its impact on the profession (which was discussed previously here). One of the key takeaways I had from his presentation was how a lot of innovation is recombining ideas that already exist. Check this video he posted that highlights some of the points from his talk:



Also checked out the presentation by Kevin Kolliniatis from KPMG and Chris Dulny from PwC, "AI and the evolution of the audit". Chris did a good job breaking down AI and making it digestible for the crowd. Kevin highlighted Mindbridge.ai in his presentation, noting that AI is key for identifying unusual patterns.


That being said, the continuing challenge is how we get data out of client systems in a manner that's reliable (e.g. it's the right data, for the right period, etc.) and understood (e.g. we don't have to go back and forth with the client to understand what they sent).

Last but not least was "Future of finance in a digital world" with Grant Abrams and Tahanie Thabet from Deloitte. They broke down how digital technologies are reshaping the way the finance department works. As I've expressed here, one of the keys is to appreciate the difference between AI and Robotic Process Automation (RPA). So I thought it was really beneficial that they actually showed how such automation can assist with moving data from invoices into the system (the demo was slightly different than the one that can be seen below, but it illustrates the potential of RPA). They didn't get into a lot of detail on blockchain but mentioned it is relevant to the space (apparently they have someone in the group who specifically tackles these types of conversations).
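In the same spirit as that demo (though this is just a toy sketch with a made-up invoice layout and file name, not the tool they showed), the essence of that kind of RPA step is pulling structured fields out of a document and writing them into the system of record:

```python
import csv
import re

# Toy invoice text standing in for the scanned or emailed document.
invoice_text = """
Invoice Number: INV-2017-0042
Vendor: Acme Office Supplies
Date: 2017-09-15
Amount Due: $1,245.67
"""

# Pull the fields a bot would otherwise key into the ERP system by hand.
fields = {
    "invoice_number": re.search(r"Invoice Number:\s*(\S+)", invoice_text).group(1),
    "vendor": re.search(r"Vendor:\s*(.+)", invoice_text).group(1).strip(),
    "date": re.search(r"Date:\s*(\S+)", invoice_text).group(1),
    "amount": re.search(r"Amount Due:\s*\$([\d,\.]+)", invoice_text).group(1),
}

# "Post" the entry: here, append it to a CSV standing in for the
# accounting system's import queue.
with open("ap_import_queue.csv", "a", newline="") as f:
    csv.writer(f).writerow([fields["invoice_number"], fields["vendor"],
                            fields["date"], fields["amount"].replace(",", "")])
print(fields)
```

The rules-based nature of this step is exactly what distinguishes RPA from AI: it follows a script rather than learning from data.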


Kudos to CPA Canada for tackling these leading-edge topics! Most of these sessions were well attended and people asked questions wanting to know more. It's through these types of open forums that CPAs can learn to embrace the change that we all know is coming.

Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.

Monday, September 25, 2017

Will the iPhone's blue ocean strategy work?

Apple unveiled its much-anticipated iPhone upgrade - the iPhone X - earlier this month.

The following video is a splashy summary of what the phone offers:


The following video has Jony Ive's voice-over and gives a bit more about the actual technology behind everyone's favourite iDevice:



The most interesting feature for me was the augmented reality piece. With the success of Pokemon Go, the business opportunity is just waiting to be exploited. However, there seems to be more work to be done before it is ready for mass consumption.

Perhaps, the following Funny or Die "review" of the release summarizes the sentiment out there:



But is it fair?

It's definitely not the wow of the first iPhone or iPad release. It feels incremental. However, the Wall Street Journal has a different theory: Apple is targeting the Chinese "elite" who would want such a phone because of the status it affords:

"The iPhone X design has raised hopes that it can reverse Apple’s fortunes in China, Apple’s most important market outside the U.S., where sales have fallen six straight quarters.

“The high-end Chinese phone market is super competitive and customers are very discerning but also enthusiastic,” said Benedict Evans, a partner at Andreessen Horowitz, a venture-capital firm. “If Apple can get something that rings the bell [with them], then this will work.”"

This could be a blue ocean strategy at work (see the video below for more).

The idea of a blue ocean strategy is that instead of competing in the blood-soaked waters of intense competition, companies migrate to the blue ocean where there is no competition, or where the existing competition doesn't matter.

Let's face it.

Either we're guilty of lining up for one of those iDevices, or we know someone who did. At the same time, there are no big line-ups for Microsoft or Samsung computing devices. This uniquely positions Apple to capitalize on its brand, while others are left fighting in the red ocean on product features and price.




Author: Malik Datardina, CPA, CA, CISA. Malik works at Auvenir as a GRC Strategist, helping to transform the way we do financial audits. The opinions expressed here do not necessarily represent the views of UWCISA, UW, Auvenir, Deloitte, or anyone else.