New analysis by FCA economists working with academic collaborators from Georgetown University and Boston College shows that a simple automated ‘robo-advice’ tool significantly improved borrower repayment decisions in a randomised controlled trial. Many individuals report being willing to pay more for the tool than its monetary benefit, potentially suggesting a significant mental cost to consumers in juggling debts and making repayment decisions.
This article summarises work described in detail in Occasional Paper 61.
A timely role for robo-advice
Against the backdrop of a severe cost of living crisis, concerns about consumer indebtedness and financial distress are growing. More people will find themselves borrowing to make ends meet and then potentially struggling to manage these debts.
Most debt advice services – both free and paid-for – are designed to help consumers already in serious financial difficulty. But poor repayment decisions upstream of this – at an earlier stage in the debt lifecycle – may cause consumers serious difficulty later on. Is there a low-cost way of helping consumers make better repayment decisions while time is still on their side?
This is where automated ‘robo-advice’ comes in. If a consumer is able to set aside a fixed amount of money to repay debts each month and wants to minimise total interest and fees, there is a clear ‘right’ answer for the order in which they should repay debts. (This assumes the consumer doesn’t have a preference about which debts to pay off first. A robo-advice tool could be adapted to handle these cases.) So this could be an ideal situation for an algorithm to step in and help.
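To make this concrete, here is a minimal sketch (our own illustration, not the tool used in the study) of one such rule: meet every minimum payment to avoid fees, then direct whatever is left at the highest-APR debt first. It assumes the monthly budget covers all minimum payments and ignores complications such as promotional rates; a fully general optimiser would also compare each fee against the interest saved by diverting that minimum elsewhere.

```python
# Illustrative sketch only, not the study's robo-advice tool: cover each
# minimum payment to avoid fees, then send any remaining budget to the
# highest-APR debt first. Assumes the budget covers every minimum payment.

def allocate(budget, debts):
    """debts: list of dicts with 'balance', 'apr' (in %) and 'minimum' keys."""
    payments = [min(d["minimum"], d["balance"]) for d in debts]
    budget -= sum(payments)
    assert budget >= 0, "sketch assumes the budget covers every minimum payment"
    # Pay down the most expensive remaining balances first (highest APR).
    for i in sorted(range(len(debts)), key=lambda j: debts[j]["apr"], reverse=True):
        extra = min(budget, debts[i]["balance"] - payments[i])
        payments[i] += extra
        budget -= extra
    return payments
```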
We designed and ran an experiment to test the potential for robo-advice to improve debt management decisions, with the aim of answering the following questions:
- How much does robo-advice improve consumers’ debt management decisions?
- Who accepts the tool’s advice and who benefits most? Who rejects the offer of help?
- Do consumers learn from the tool so that their subsequent, unaided decisions are improved? Does this depend on whether or not the tool is bundled with debt repayment tips so that borrowers learn general principles of debt management alongside seeing advice on optimal repayment?
A robo-advice experiment
We ran our robo-advice experiment online, with nearly 3,500 participants representative of the UK population.
Participants were given a sequence of nine hypothetical, but realistic, debt repayment scenarios involving between two and four debt accounts (for instance, mortgages, personal loans, and credit card debts) and APRs ranging between 0% and 292%. Below is an example of such a scenario.
Table 1: This month you have £500 set aside to pay off some of your debts. How will you split this payment across your debts to minimise interest and fees?
| Debt | Balance | Interest rate | Minimum repayment | Fee for missed minimum payment | This month will pay off... |
| --- | --- | --- | --- | --- | --- |
| Credit card | £260.00 | 39.9% APR | £6.50 | £12 | 0.00 |
| Store card | £146.84 | 0% APR | £20.00 | £25 | 0.00 |
| Credit card | £472.26 | 24.45% APR | £10.63 | £12 | 0.00 |
| Amount still to allocate: £500 | | | | | Total: 0.00 |
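For illustration (our own back-of-the-envelope calculation, not a result reported in the paper), the cost-minimising split of the £500 in Table 1 over a one-month horizon is as follows:

```python
# Our own illustrative calculation for the Table 1 scenario, not taken from the paper.
payments = {
    "Store card (0% APR)": 20.00,       # meet the minimum: extra payments at 0% save nothing,
                                        # but missing the minimum would trigger the £25 fee
    "Credit card (39.9% APR)": 260.00,  # clear the most expensive balance in full
    "Credit card (24.45% APR)": 220.00, # remainder; comfortably covers its £10.63 minimum
}
assert sum(payments.values()) == 500.00
```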
Within each scenario we asked participants to allocate a fixed sum of money to make repayments against a given set of debts, with the aim of minimising total interest charges and fees. Scenarios were shown to each participant in a random order, so each participant completed a sequence of nine trials, where trial 1 was the first (randomly selected) scenario they encountered, trial 2 the second, and so on.
Individuals were randomly pre-assigned to one of five groups. In the control group, individuals made repayment decisions without access to robo-advice in any of the nine scenarios. Individuals in the other four ‘treatment’ groups were offered one of four robo-advice variants (treatments) but in the middle three trials only (trials 4-6).
The four variants were:
- free robo-advice
- free robo-advice with debt management tips (akin to providing a rationale/explanation for the robo debt advice)
- paid-for robo-advice
- paid-for robo-advice with debt management tips
Note: the two treatments that include ‘debt management tips’ can also be thought of as featuring some algorithmic explainability. In the paid-for robo-advice treatment, participants were asked to say what they would pay for the robo-advice (this was done in a way that incentivises truthful responses).
Offering robo-advice variants only in trials 4-6 ensured we would have a pre-intervention baseline (trials 1-3) to assess the quality of decisions made without robo-advice. It also gave us a post-intervention period (trials 7-9) to test for the potential persistence of any impact from repeated use of the robo-advice tool, such as a possible learning effect.
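In pseudocode terms, the design can be summarised as follows (a sketch using our own labels, not code from the study):

```python
# Sketch of the experimental design described above (labels are our own).
ARMS = [
    "control",            # no robo-advice in any of the nine trials
    "free",               # free robo-advice
    "free_with_tips",     # free robo-advice plus debt management tips
    "paid",               # paid-for robo-advice (willingness to pay elicited)
    "paid_with_tips",     # paid-for robo-advice plus debt management tips
]

def advice_offered(arm: str, trial: int) -> bool:
    """Trials run 1-9; robo-advice is offered only in the middle three (4-6)."""
    return arm != "control" and 4 <= trial <= 6
```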
Regardless of the treatment group, the robo-advice always proposed the optimal repayment strategy and this property of the tool was made clear to participants.
Findings
We measured the quality of debt repayment decisions using ‘average percentage of savings forgone’. Imagine an optimal decision (one that minimises interest and fees) compared to the worst possible decision (that maximises costs). We can think of this difference as the total hypothetical savings available in a scenario – going from worst to best possible decision.
A person who makes an optimal decision captures all these savings and misses out on none – so their ‘average percentage of savings forgone’ is zero. Someone who makes the worst possible decision forgoes 100% of the savings.
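Written as a simple function (variable names are ours), the measure is:

```python
def pct_savings_forgone(cost_chosen, cost_best, cost_worst):
    """Share of the available savings given up: 0 for the cost-minimising
    decision, 100 for the cost-maximising one."""
    return 100 * (cost_chosen - cost_best) / (cost_worst - cost_best)
```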
The average percentage of savings forgone was 21.9% in our pre-intervention phase, equivalent to having an APR that is 3.55 percentage points higher. Thus our experiment confirmed evidence from studies elsewhere that unaided debt repayment mistakes are common and may have sizeable economic consequences for debtors’ wealth.
Subjects given free robo-advice improved their repayment decisions significantly relative to the control group. Among those who accepted help from the robo-advice tool, average savings forgone declined by 19.5 percentage points from 21.9% down to 2.4% – this is the ‘treatment on the treated’ effect.
This is shown in the figure below. Losses did not drop completely to 0% because 5.7% of treated subjects chose to override the robo-advisor’s recommendations.
Once we allow for the fact that consumers decline free robo-advice in roughly 25% of cases, the estimated effect falls (in absolute magnitude) to 14.6 percentage points. This is the so-called ‘intention to treat’ effect, and it is perhaps the effect estimate most relevant for policy, given that non-take-up of advice has to be taken into account.
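The two estimates fit together: in a simple setting where the advice has no effect on those who decline it, the intention-to-treat effect is approximately the take-up rate times the treatment-on-the-treated effect (a back-of-the-envelope check, not a calculation from the paper):

```python
take_up = 0.75        # roughly 25% of free robo-advice offers were declined
tot_effect = 19.5     # treatment-on-the-treated effect, in percentage points
itt_effect = take_up * tot_effect
print(round(itt_effect, 1))  # ~14.6 percentage points, matching the estimate above
```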
[Figure: average percentage of savings forgone]
The effects disproportionately benefit subjects with low financial literacy and numeracy, suggesting that robo-advice can help to level the playing field in consumer debt management.
In terms of willingness to pay (WTP) for robo-advice, we find that individuals’ WTP was, on average, higher than the monetary benefit they got from the advice (estimated during the pre-intervention phase). This could be because subjects value eliminating any possibility of a mistake, rather than just the average mistakes they made when unassisted. It could also reflect a desire to avoid the cognitive and psychological costs of solving the repayment problems on their own.
From a policy perspective, we would hope that demand for robo-advice is greatest among the less financially and numerically skilled, who make costlier mistakes in the pre-intervention phase. And, indeed, it is: demand for robo-advice is inversely related to financial and numerical literacy.
Everything else equal, it is also inversely related to confidence in one’s skills and positively related to trust in robo-advice. Financially literate subjects had a lower WTP, while men and more trustful subjects were willing to pay more. Low trust in algorithms is also one of the strongest correlates of overriding robo-advice (which is never optimal in our setting), as is the desire to interact with a human advisor.
Interestingly, participants were not, on average, willing to pay more to receive education explaining what the robo-advisor was doing alongside the robo-advice itself, suggesting that, in this setting at least, participants assigned no value to algorithmic explainability.
Finally, we ask whether robo-advice helps subjects learn strategies for optimal debt repayment. While decisions improve post-intervention in all groups, the largest improvement is in the control group, which had to work through more debt-management problems unaided before reaching the post-intervention phase.
We detect no learning by imitation or from the educational tips bundled with robo-advice. This suggests that, to be effective, robo-advice interventions need to be repeated each time consumers make choices.
The potentially very low cost of providing robo-advice, which in principle can be done through personal devices without the scale constraints of traditional advice, means that repeated interventions may well be feasible.
Who accepts the offer of robo-advice, who benefits most, and why?
This is just one study, but it raises some points for regulators to consider.
Our trials show that a significant proportion of people struggle to make good debt repayment decisions even when all the information is available to them, and that a simple robo-advice tool may help improve decisions significantly.
Such a tool could offer an especially good form of consumer decision support in this context since (i) when it comes to managing debts, unlike, say, risky investment choices, there is a clear choice the robo-adviser can recommend to minimise costs, and (ii) robo-advice may be far cheaper to deliver than solutions involving human interaction. This is a particularly important consideration since those struggling to manage debts are unlikely to be able to pay for support and may need it on an ongoing basis, noting the absence of learning effects in our study.
Around 25% of consumers refuse the offer of free robo-advice. Many of them go on to make costly mistakes in their decisions.
Some consumers may be reluctant to take advantage of algorithmic help even when they are told it will clearly make them better off. This connects to an emerging literature on trust in algorithms, which explores whether and when humans display ‘algorithm aversion’. Addressing low trust in algorithms may be an important demand-side enabler in helping consumers harness technology to navigate complex environments.
A second reason why consumers might reject free robo-advice is concern about data privacy. This is much less likely to be true in our experimental setting – where the software already knows everything – but will be crucial in the real-world context of data sharing and open finance.
People seem not to value ‘explainable’ algorithms in this context. This insight connects to other emerging literature on explainable AI and may contribute to a richer understanding of where and how explainability matters most for consumer welfare.
Our findings may point to an important area for future research since simpler, more explainable algorithms may sometimes be less accurate in practice (implying some loss of welfare from less suitable decisions). However, it may be that people simply value explainable algorithms less when there is a mathematically optimal recommendation/decision, as in our debt management scenarios.
Conversely, they may value it more when the answer is subjective and based on prediction, such as rating someone’s creditworthiness under uncertainty based on personal and situational factors. Whether the inherent subjectivity or objectivity of the ‘answer’ matters for the value of explainable AI, and how consumers might value any trade-offs, are interesting questions for future research.
One shot robo-advice or ‘on demand’ support?
Consumers do not seem to learn much about optimal debt repayment from using robo-advice during our trials – even when explanations for the advice are provided alongside it.
This suggests that one-off interventions and/or money management education may not be enough to improve long-term decision-making, and that sustained improvements in consumer outcomes in this setting may be achieved only where an effective advice tool is available ‘on demand’.
Making robo-advice a reality
This raises the question of what conditions are needed for such a tool to emerge in today’s markets. A robo-advice tool for debt management would require access to enough data to build a holistic picture of the debts someone holds with different providers.
Open banking already enables payment accounts data to be shared with trusted parties with explicit consumer consent. The robo-advice tool would rely on extending these sharing arrangements to the broader financial sector (ie open finance). This is a development that has the potential to transform the way consumers and businesses use financial services, as the FCA set out in its open finance feedback statement. Support beyond this may also help to foster the development of this innovation.
In the UK, the FCA, through its Regulatory Sandbox and Innovation Pathways, can support firms in developing automated models for consumer decision support.
Finally, if individual lenders provided real-time robo-advice, as well as or in place of public or third-party providers, then we might expect conflicts of interest to arise. But because the debt management problems we study are inherently computational, with unique optimal solutions (unlike risky and uncertain choices such as investments), overseeing and assessing algorithms for debt management advice should be easier than in other applications.
All in all, given the high stakes for many vulnerable consumers, the potential need for ongoing decision support to navigate debts, and the fact that optimal advice depends on neither individual risk preferences nor beliefs, debt management may be a particularly promising domain for robo-advice.
Further work could be warranted to explore how best to support the development of a real-world robo-tool for borrowers.