This article was first published on Private Banker International website.


Advancements in artificial intelligence (AI) and machine learning have streamlined the compliance process through algorithms designed to identify general risk categories.  While these advancements have undoubtedly narrowed the intelligence gap to some extent, technology still cannot be relied upon as a complete substitute for human intelligence and nuanced analysis.

Increasing access to compiled data sources and new technology have redefined the banking compliance landscape over the past few years.  ‘Big Data’ and ‘Artificial Intelligence’ (AI) have become the buzzwords of the modern lexicon, as each has become an integral part of the compliance model for warding off potential threats.  Developments in AI and machine learning have spawned an entire industry of data aggregators able to offer nearly instantaneous profiles of individuals and companies based on freely available information found online and through subscription-based services.  These innovations undoubtedly provide speedy and cost-effective means of identifying the potential red-flag issues that uniformly concern the banking industry.  However, the underlying processes remain reliant on pre-set algorithms built on a limited number of variables in order to produce a consistent scoring model.  This has led to an overreliance on technology and a growing dissociation from human source intelligence (HUMINT) collection.

While high-level red flags such as litigation, bankruptcy, political exposure (PEP status), and derogatory media mentions can be automated and provide an effective first-pass compliance screen, these searches do not always capture every compliance risk within a clean algorithmic model.  Moreover, access to more information does not necessarily yield more intelligence: it can instead produce conflicting data and a growing number of false positives.  Although an abundance of data may be considered a plus by many, the reality remains that technology is only as good as the specific intelligence and connections one can derive from it.  No technology yet offers a ‘magic bullet’ approach or methodology for assessing potential compliance risks that can be completely automated and applied in all scenarios.  Technology can indeed provide reliable insight into specific sets of predetermined ‘check-the-box’ concerns.  However, current automated compliance models rely solely on available historical data and postulate that past actions will predict future ones.  While this certainly helps in screening for obvious compliance risks, it adds no value in identifying new threats that have not yet established a digital footprint of tell-tale risk behaviours.
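The limitation described above can be made concrete with a toy sketch.  The flag names, weights, and threshold below are entirely hypothetical, not any vendor's actual model; the point is that a fixed rule set can only ever score the variables it was built with, so a novel risk with no predefined flag contributes nothing:

```python
# Hypothetical rule-based first-pass compliance screen.
# Flags, weights, and threshold are illustrative assumptions only.
RED_FLAG_WEIGHTS = {
    "litigation": 2,
    "bankruptcy": 3,
    "pep": 4,            # politically exposed person
    "adverse_media": 2,  # derogatory media mentions
}
ESCALATION_THRESHOLD = 4  # scores at or above this trigger manual review


def screen(profile: dict) -> tuple[int, bool]:
    """Score a profile against the predefined flags.

    Any risk factor absent from RED_FLAG_WEIGHTS -- i.e. a new
    behaviour with no digital footprint -- is simply invisible.
    """
    score = sum(weight for flag, weight in RED_FLAG_WEIGHTS.items()
                if profile.get(flag, False))
    return score, score >= ESCALATION_THRESHOLD


# A PEP with adverse media is escalated for review...
print(screen({"pep": True, "adverse_media": True}))    # (6, True)
# ...but a risk the model has no variable for scores zero.
print(screen({"undisclosed_gambling_habit": True}))    # (0, False)
```

The second call is exactly the blind spot the article describes: the behaviour may be a serious risk, but because it was never encoded as a variable, the automated screen passes it without comment.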

More importantly, raw data alone cannot account for socio-political nuances, nor can it establish a fundamental understanding of a person’s character or intent.  AI cannot replicate human intuition, nor does it provide perspective by weighing the external factors that can play a role in decision-making.  While AI algorithms can certainly identify credit risks based on debt ratios or previously filed bankruptcies, these are all past events.  HUMINT, by contrast, is better suited to establishing current conditions, which can ultimately reveal unprecedented risks to a bank.  For instance, AI-based reviews of high-net-worth individuals normally include professional background checks, shareholdings, and media profile reviews.  Only discreet human inquiries, however, can elicit insight into an individual’s reputation and how they are actually perceived within their industry.  This issue is becoming increasingly important given the growing industry devoted to countering existing AI compliance algorithms and manipulating derogatory online mentions through Search Engine Optimization (SEO) tactics that push unwanted content deep into Google’s results pages.  It recalls the old adage that the best place to hide a dead body is past the second page of Google results.

No database exists that can reliably inform an AI system of the significant life events that change an individual’s risk profile.  For instance, AI technology cannot observe that an individual is experiencing marital problems that may end in divorce and significantly alter their financial position.  Nor can AI systems determine whether an individual has developed a potentially hazardous vice, such as drug addiction or gambling.  AI also cannot interpret social cues or fully grasp the intended meaning of a social media post, which may draw unwanted negative attention that ultimately reflects on a bank’s own reputation.

These are only a few examples of risk factors that can manifest with no historical precedent and can only be assessed through human observation.  This is where HUMINT retains a significant advantage: AI technology cannot effectively interpret cultural, social, or economic factors, nor does it understand the intricacies of human behaviour.

Although compliance technology holds inherent promise, numerous obstacles must still be overcome before a platform could realistically eliminate the human interface and rely solely on AI.  What can be concluded from the current state of technology is that no single methodology is effective in isolation.  Integrating technology with longstanding and proven HUMINT collection and analysis therefore remains the most effective approach to assessing risk in the compliance model.