Operationalizing algorithmic explainability in the context of risk profiling done by robo financial advisory apps

Robo advisors are financial advisory apps that profile users into risk classes before providing financial advice. This risk profiling is both functionally important and legally mandatory: errors at this first step lead to incorrect recommendations for users. Further, the lack of transparency and explanation for these automated decisions makes it harder for users and regulators to understand the rationale behind the advice these apps give, creating a trust deficit. Regulators monitor this profiling but possess no independent toolkit to “demystify” the black box or adequately explain the decision-making process of the robo financial advisor.

Our paper proposes an approach to developing a ‘RegTech tool’ that can explain a robo advisor's decision-making. We use machine learning models to reverse-engineer the importance of the features in the black-box algorithm the robo advisor uses for risk profiling, and we provide three levels of explanation. First, we identify the importance of the inputs used in the risk-profiling algorithm. Second, we infer relationships among the inputs and between the inputs and the assigned risk classes. Third, we allow regulators to explain the decision for any given user profile, in order to ‘spot check’ an individual data point. With these three explanation methods, we give regulators, who may lack the technical knowledge to interpret algorithmic decisions, a way to understand those decisions and to verify that the risk profiling done by robo advisory applications complies with the regulations it is subject to.
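The three explanation levels above can be sketched with a surrogate-model approach. The snippet below is a minimal illustration, not the paper's actual tool: `black_box_profile` is a hypothetical stand-in for the robo advisor's proprietary risk profiler, and the feature names (`age`, `income`, `risk_appetite`) are invented for the example. A regulator-side surrogate (here a shallow decision tree from scikit-learn) is fit to probed input–output pairs, then queried for global feature importance (level 1), input–risk-class relationships (level 2), and a single-profile spot check (level 3).

```python
# Sketch of reverse-engineering a black-box risk profiler with a surrogate model.
# All names and the profiler itself are hypothetical stand-ins for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

def black_box_profile(X):
    # Hypothetical profiler: risk class driven mainly by risk appetite,
    # slightly by age; income is ignored entirely.
    age, income, appetite = X[:, 0], X[:, 1], X[:, 2]
    score = 2.0 * appetite - 0.01 * age
    return np.digitize(score, bins=[0.6, 1.2])  # risk classes 0, 1, 2

# Probe the black box with synthetic user profiles.
feature_names = ["age", "income", "risk_appetite"]
X = np.column_stack([
    rng.uniform(18, 75, 5000),       # age in years
    rng.uniform(20e3, 200e3, 5000),  # annual income
    rng.uniform(0, 1, 5000),         # questionnaire risk-appetite score
])
y = black_box_profile(X)

# Interpretable surrogate trained to mimic the black box's assignments.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Level 1: global importance of each input, via permutation importance.
imp = permutation_importance(surrogate, X, y, n_repeats=5, random_state=0)
ranking = sorted(zip(feature_names, imp.importances_mean), key=lambda t: -t[1])

# Level 2: relationship between one input and the assigned risk class,
# read off as the mean predicted class per quartile bin of that input.
edges = np.quantile(X[:, 2], [0, 0.25, 0.5, 0.75, 1])
bin_idx = np.clip(np.digitize(X[:, 2], edges) - 1, 0, 3)
trend = [surrogate.predict(X[bin_idx == b]).mean() for b in range(4)]

# Level 3: 'spot check' one user profile through the surrogate.
user = np.array([[30, 60e3, 0.9]])        # age 30, moderate income, high appetite
user_class = surrogate.predict(user)[0]   # surrogate's class for this profile
path = surrogate.decision_path(user)      # decision rules applied to this user
```

Any interpretable surrogate (logistic regression, rule lists) could replace the tree; a tree is used here because its decision path directly supports the per-profile spot check.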