Written by Alexandra Cutean, Senior Director of Research & Policy, ICTC
In anticipation of ICTC’s soon-to-be-released research study on AI and its impact on the Canadian labour market, Alexandra Cutean (Senior Director, Research & Policy) sat down with Eli Fathi (President & CEO) and John Colthart (VP of Integrated Experience) at MindBridge Ai to learn how this unique business is redefining what “reasonable assurance” means for audit. Representing a leader in the AI space and one of Canada’s top fintech companies, Eli and John give us a glimpse into technologically driven changes in the audit and assurance industry, and into labour augmentation considerations for audit professionals. They outline MindBridge’s pathway into this traditional sector and highlight how Canada can capitalize on its strengths in AI to not just compete, but lead, in the global digital economy.
To start, tell me a bit about the journey of MindBridge Ai. Having previously worked in the accounting space myself for a little while, I’m interested to know what compelled you to start this company.
Eli: As you may know from your own experience, the financial world is changing rapidly and, with that, facing new and growing challenges. On any given day, someone somewhere in the world is suffering financial losses due to human error or intentional fraud. In fact, the latest ACFE Report to the Nations estimates that firms lose about 5% of their revenue to fraud annually – this translates to nearly $4 trillion per year, and it’s very possible that the real number is much higher. MindBridge was founded with the goal of leveraging the power of artificial intelligence and machine learning to restore trust in financial transactions, and Ai Auditor was our first product to market.
Even at 5%, that figure is extremely substantial. But considering the large-scale changes taking place in the finance world overall, what made you focus on audit and assurance specifically, instead of, say, banking or insurance?
Eli: We wanted to tackle the audit and assurance space first because we knew we could harness AI to address the fundamental reasons why financial risk occurs. First, today’s accounting practices and tools are unable to sufficiently understand and analyze the enormous amounts of data locked in client systems. Second, many current practices rely on identification techniques and mechanisms that simply cannot cope with the increasing complexity of today’s client data.
And for us, the proof is in the pudding when it comes to addressing this need. To date, we’ve onboarded over 300 customers around the world, and we work with firms of all sizes. We help financial services professionals detect risk earlier and, with a greater level of detail and insight into the data they work with, extract better value for clients.
The last point you mention is particularly interesting. Can you give me an example of how Ai Auditor helps businesses better manage financial data or evaluate risks?
Eli: In technical terms, we combine AI with traditional rules-based algorithms and statistical methods to provide 100% transaction analysis – a world first for audit. In simple terms, using Ai Auditor means that within minutes, an auditor can review trends and anomalies and risk-score transactions with a level of speed and efficiency that is otherwise unimaginable. This enhances [the audit professional’s] judgement capabilities and reduces risk, while empowering them to deliver better insights to clients.
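For technically inclined readers, the minimal Python sketch below illustrates the general idea of blending rules-based tests with a simple statistical score to risk-score every transaction in a ledger. It is an illustrative assumption from us, the interviewers, not MindBridge’s actual algorithm: the column names, rules, thresholds, and weights are all hypothetical.

```python
# Illustrative sketch only: combining rule-based flags with a statistical
# outlier score to produce one risk score per transaction (100% coverage).
# Column names, thresholds, and weights are hypothetical assumptions.
import pandas as pd

ledger = pd.DataFrame({
    "entry_id": [1, 2, 3, 4, 5],
    "amount": [120.00, 9999.00, 54.10, 87500.00, 300.00],
    "weekday": ["Mon", "Sun", "Tue", "Sun", "Wed"],
    "approved_by_poster": [False, True, False, True, False],
})

def rule_flags(row) -> int:
    """Count how many simple, rules-based audit tests a transaction trips."""
    flags = 0
    flags += row["weekday"] in ("Sat", "Sun")   # posted on a weekend
    flags += row["approved_by_poster"]          # approver is also the poster
    flags += abs(row["amount"]) >= 10_000       # unusually large amount
    return flags

# Statistical component: how far each amount sits from the ledger's mean.
z = (ledger["amount"] - ledger["amount"].mean()) / ledger["amount"].std()

# Blend both signals into a single score for every entry, then review the
# highest-scoring transactions first.
ledger["risk_score"] = 0.6 * ledger.apply(rule_flags, axis=1) + 0.4 * z.abs()
print(ledger.sort_values("risk_score", ascending=False))
```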
On the client side, was or is there any initial apprehension from your customers about introducing and using the product in their practices?
John: In short, yes. Evolving an industry that is thousands of years old, and beholden to regulations and practices entrenched in everyone’s consciousness, is an interesting challenge. Not only are we [MindBridge] questioning the expectations of a 40-year-old Computer Assisted Audit Techniques and Tools industry, we’re also pushing the envelope when it comes to the interpretation of audit standards. To put this in perspective, we’re talking about using technologies that didn’t exist when many of those very standards were written.
This certainly sounds like you’re charting new territory. However, given these many interwoven parts and challenges, what was the main concern that your customers voiced when it came to uptake of the product?
John: By far, the biggest concern – across firms of all sizes – centered on understanding how they [customers] would work with the AI, how the technology would fit into their everyday lives, and how it would impact their jobs. I want to make it clear that AI will not replace financial professionals. There is simply no substitute for the experience and intuition that people bring to understanding the results of the analysis, nor can a machine replace the interactions and relationships built with clients. But what AI can do is augment certain parts of their jobs. It allows the auditor to focus on value-added insights and services, instead of spending so much time in the weeds, digging through data. Our goal is to help firms understand these benefits and work with them on their journey.
That’s an interesting point about the augmentation of certain aspects of their jobs. Our upcoming report on AI, focusing on its impact on the Canadian labour market, zeroes in on this very topic; in fact, some of the key examples we note relate to changes in the financial world, including the augmentation of roles like auditors, accountants, and financial analysts. Based on your experience, can you give me an example of how you think MindBridge augments the jobs of auditors? What does the platform help them do more effectively, or more efficiently?
John: Since it’s impossible to assert with certainty that an event, such as a financial misstatement, will or will not occur, auditors have operated under the philosophy of what’s called “reasonable assurance” for decades. With too much data, too little time, and limitations inherent in the internal controls of clients, auditors cannot provide absolute or guaranteed assurance on their findings – instead, they have to use their professional judgement to fill in the gaps. What this means is that both auditors and clients operate on the principle that audit evidence is more persuasive than conclusive, and are guided by the Generally Accepted Auditing Standards (GAAS) to provide reasonable assurance when it comes to financial statements being free of material misstatements.
What Ai Auditor does for these professionals is solve the problem of “too much data and too little time”. Because this initial data analysis work is offloaded to applications that operate at much higher speeds than people ever could, the audit professional has time to take up different and higher-value tasks. At the same time, AI improves the testing of ledger data by going beyond traditional sampling techniques to identify misstatements based on actual risk analysis. This gives the audit professional a higher quality of data to work with, which can not only help them deliver better insights, but potentially even uncover new business opportunities.
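To make the contrast between traditional sampling and full-population risk analysis concrete, here is a brief, hypothetical Python sketch that scores an entire synthetic ledger and compares a random sample with a risk-ranked selection. The data and the toy risk_score function are our own illustrative assumptions, not MindBridge’s method.

```python
# Illustrative sketch only: random sampling vs. risk-ranked review of the
# full population of journal entries. Data and scoring are hypothetical.
import random

random.seed(0)

# A synthetic population of 10,000 journal entries.
population = [
    {"entry_id": i, "amount": round(random.lognormvariate(5, 1.5), 2)}
    for i in range(10_000)
]

def risk_score(entry) -> float:
    """Toy score: larger and round-number amounts look riskier."""
    is_round = 1.0 if entry["amount"] % 1_000 < 1 else 0.0
    return entry["amount"] / 100_000 + is_round

# Traditional approach: examine a small random sample of entries.
sample = random.sample(population, k=25)

# Full-population approach: score every entry and review the riskiest first.
ranked = sorted(population, key=risk_score, reverse=True)[:25]

print("random sample ids:", sorted(e["entry_id"] for e in sample)[:10], "...")
print("highest-risk ids: ", sorted(e["entry_id"] for e in ranked)[:10], "...")
```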
Tell me a little more about the principle of “reasonable assurance”. If this is the principle that has guided the industry to date, were first-time users [of Ai Auditor] worried that they didn’t have the skills necessary to work with the platform, or that they wouldn’t be able to understand the insights it generated?
John: Naturally there was a bit of apprehension, as with any new technology that threatens to change the way things have always been done. Understanding that every new technology presents unique adoption challenges, and that AI especially is a mostly misunderstood innovation, we’ve purposely designed our platform to be human-centric and explainable. This means that our users do not need to be experts in data science or programming to run or understand the analysis.
Eli: To add to that, of course there is some level of data analysis and interpretation required when you’re using AI to analyze and measure risk across 100% of financial transactions; but the upskilling trajectory for professionals already working with large amounts of data is not a steep one. On the point of job augmentation: auditors’ most popular tool today, Microsoft Excel, reports what is in the data, whereas AI requires an understanding not only of what’s being reported, but also of the underlying reasons for it. This means some upskilling in data analysis, but a good AI tool will not make this journey a difficult one. This is where the concept of explainability comes in.
I’m glad you brought up the topic of explainability, as it’s one spurring a lot of recent debate in the AI space. I note that descriptions of Ai Auditor state “no black box magic” several times. How is the concept of explainability baked into this product?
Eli: Building trust in potentially disruptive technologies relies on users understanding the answers they’re given, and knowing how those answers are achieved. This is even more critical in an industry that is skeptical by nature and evaluates risk on a daily basis. Our philosophy in designing Ai Auditor was for it to be a “glass box”, not a black box. This means that when our AI provides insights or direction to users, we give them the feedback and explanations they need to see and understand it at every step of the process.
John: Exactly, and to drive this point home, I’ll give you an example. When identifying risky transactions, we list all of the criteria and underlying rationale behind the patterns to help auditors explain the results. When auditors use plain English in our transaction search, we use Natural Language Processing to understand the requests and then highlight the terms that allow the user to explore other possibilities in their investigations. This level of explainability helps our users not only trust the AI-based analysis, but – and this is very important – it helps them discuss the results with their clients.
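The toy Python example below gestures at this idea of explainable, plain-English search: it matches query words against a small vocabulary of audit concepts and reports which terms drove the results. The vocabulary, fields, and matching logic are illustrative assumptions only; MindBridge’s Natural Language Processing is far more sophisticated than this sketch.

```python
# Illustrative sketch only: a toy plain-English transaction search that
# surfaces which query terms produced the results ("explainability").
# The vocabulary and transaction fields are hypothetical assumptions.
import re

VOCAB = {
    "weekend": lambda t: t["weekday"] in ("Sat", "Sun"),
    "large":   lambda t: t["amount"] >= 10_000,
    "cash":    lambda t: t["account"] == "Cash",
}

transactions = [
    {"entry_id": 1, "amount": 15_000, "weekday": "Sun", "account": "Cash"},
    {"entry_id": 2, "amount": 250, "weekday": "Tue", "account": "Revenue"},
]

def search(query: str, txns):
    """Match query words against a small vocabulary of audit concepts."""
    terms = [w for w in re.findall(r"[a-z]+", query.lower()) if w in VOCAB]
    hits = [t for t in txns if all(VOCAB[w](t) for w in terms)]
    return terms, hits

terms, hits = search("show me large cash entries posted on a weekend", transactions)
print("matched terms:", terms)  # the explainable part: why these results
print("results:", [t["entry_id"] for t in hits])
```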
So, taking these changes, enhancements, and new possibilities into account, how do you envision that AI will reshape the audit and assurance industry?
John: We envision that the industry will be drastically different than it is today; the firms of the future will be analytics and AI-enabled, creating new offerings and unlocking additional potential for clients.
Advisory services will have lower barriers to entry, as auditors leverage their evolving expertise to understand information faster and derive insights that are unique to the client. Audit will still be a function at these firms, but I can envision a future where corporations themselves analyze 100% of the data in their ERP systems, while external audit does a deeper review of other parts of the business – the CRM system, for example – and ties that into the overall analysis.
Eli: Indeed, and our goal at MindBridge is to continue to lead the space by driving deeper audit data analytics, incorporating more data sources, and leveraging thousands of engagements to refine our algorithms and close more of the gaps in assessing the overall financial health of an organization.
Looking broadly like this – whether at the overall financial health of an organization, an industry, or even an entire economy – you mentioned once that you are “riding the [AI] wave”. How do you think Canada can best “ride [this] wave” and capitalize on our strengths?
Eli: The AI wave is sweeping the world and Canada is already a strong player, but we face the challenge of positioning ourselves alongside larger markets that have access to more capital and a higher volume of skilled workers. Staying focused on our core strengths in research and business development is key. We have to avoid getting sidetracked by the latest hype and trends, and focus on bringing more products and services to market more quickly.
Commercialization and market penetration are critical. Are you able to speak to any Canadian-specific attributes that helped MindBridge get to where it is today, or that can help other budding Canadian AI companies grow and scale?
Eli: One thing that we have the benefit of in Canada is a government that is very friendly to emerging technologies, research and industry. With programs like the Strategic Innovation Fund and organizations like the Canadian Trade Commissioner Service, we have access to capital and trade assistance that reduces the friction of taking products to market. We also have amazing academic strengths, which is key to growing the talent that is needed to develop commercialized products.
John: Indeed, and initiatives like MILA, the Vector Institute, Amii and others are quickly developing, with the aim of cultivating the talent and skills needed to grow the Canadian AI ecosystem. Higher education programs across the country are beginning to understand our new data challenges, and many are developing curricula that will fuel the next generation of talent.
Absolutely, talent is a central consideration, and the development and attraction of a skilled workforce to address future needs is a topic with a lot of relevance for us [ICTC]. With technology developing as quickly as it is, though, I think many workers feel a sense of worry, or even of being overwhelmed, when it comes to which pathways to take in preparation for this future. You said once that whether it be 5 years, 10 years, or 15 years, the Canadian economy will be largely digital. Outside of the financial sector, where do you see AI having the biggest impact?
Eli: With AI ushering in the fourth industrial revolution, I don’t believe its benefits or impacts will be confined to any one sector. Instead, I think the bigger change will be in the nature of our jobs themselves. Of course, there is much talk about the potential of AI to displace human workers, and while there is some truth to this, job displacement has occurred with many previous generations of industrial change. This notion is not new – it existed long before the emergence of AI.
But whether accurate or not, the fear of AI-driven job displacement exists and that, in and of itself, I believe to be meaningful. Where do you think the biggest source of worry stems from?
Eli: Historically, there has always been a great fear of “technological unemployment”, but that reality has almost never come to pass. Job creation actually exceeded job losses in all three previous industrial revolutions. What is different this time around with AI is that its benefits can be uneven. In fact, a recent study by Edelman found that more than half of the general public felt AI has the capacity to negatively impact those who are socioeconomically disadvantaged.
This last point – the possibility of its benefits being uneven – hits close to home. One of our [ICTC’s] core values is to try to ensure that technological developments unfold in a way that allows for greater economic participation for all Canadians, and inclusive benefit. Where do you think we, as a country, should focus to safeguard these interests?
Eli: I think in many ways, it boils down to leadership across several areas. If not positioned properly through evidence-based decisions and forward-looking policy, AI has the potential to be divisive. So, whatever future AI brings, all leaders must prepare their organizations to manage this change, because more and more, AI will not be optional or a “nice to have”; it will be the standard for doing business in a modern age. By embracing the technology now, stakeholders from industry to government will have an early leg up on the competitive differentiation and value that it offers. And if we work together to tackle the issues of labour augmentation and skill shifts today, we can create an inclusive and resilient workforce that understands the technology, utilizes it, and collectively benefits from it.