NFU Blog
Beware: the robo-adviser is coming
With several large financial institutions, from Aviva to BlackRock, investing heavily in robo-advisers, these tools are gaining market share and might within a few years present serious competition to their human counterparts. NFU has long called for caution in how these new tools are implemented. In this article, we therefore take a closer look at what constitutes a robo-adviser and why these programs may come with several risks that their promoters often gloss over, yet which could have serious implications for their users.
Have you ever had a financial adviser named Amelia, Nadia or Nina? A few of you reading this article may have, but nowhere near the number who will in the near future. The robo-adviser in financial services is here to stay, for better or worse. These virtual humans, notably all female and Caucasian, are at the forefront of a development that has been taking shape in the financial services industry for several years but is now picking up speed. We’ve written about the concept of robo-advisers before, but will this time focus on some of the reasons why they may not be the easy solution that many promoters would like them to be.
Robo-advisers can be defined as ‘digital platforms that provide automated, algorithm-driven financial planning services with little to no human supervision.’ In 2013, there were an estimated 51 robo-advisers managing $2bn in assets. By March 2017, these figures had grown to 2,148 advisers managing an estimated $140bn. By the end of 2017, the assets managed are expected to be in the $285bn range. That is an increase in assets under management of 14,150% over four years, and it shows the strong appetite that banks and investment firms have for these new programs.
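For reference, that growth figure follows directly from the numbers quoted above, taking the end-of-2017 projection as the endpoint: ($285bn − $2bn) / $2bn = 141.5, i.e. an increase of 141.5 × 100% = 14,150%.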
The companies point to customer demand for cheaper and more accessible advisory services, and plan to use the programs to advise on everything from asset management, through mortgage loans, to customer complaint handling. This development will be further assisted by the implementation of the PSD II directive in 2018. The directive gives bank customers the possibility to use third-party providers to manage their finances, with banks having to hand over financial information if the customer has given consent. This will open up the market for many new actors to help advise on and manage people’s money, which will force the incumbents to fight even harder to keep their customers satisfied.
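In essence, the directive makes customer consent the gatekeeper for account data. The following is a minimal sketch of that consent gate in Python; the class, method names and data are invented for illustration and do not represent any real banking API:

```python
# Hypothetical sketch of the PSD II consent gate; names are invented
# for this example, not drawn from a real banking interface.

class Bank:
    def __init__(self):
        self.consents = set()                      # (customer, provider) pairs
        self.accounts = {"alice": {"balance": 1200}}

    def grant_consent(self, customer, provider):
        self.consents.add((customer, provider))

    def account_data(self, customer, provider):
        # Under PSD II, the bank must share data with a third-party
        # provider -- but only if the customer has consented.
        if (customer, provider) not in self.consents:
            raise PermissionError("no customer consent on file")
        return self.accounts[customer]

bank = Bank()
bank.grant_consent("alice", "robo-adviser-x")
print(bank.account_data("alice", "robo-adviser-x"))  # {'balance': 1200}
```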
However, while there are certainly many great possibilities associated with robo-advisers, some less positive aspects should always be kept in mind when discussing the implementation of these programs.
One point that is often advanced is that robo-advisers are less prone to bias than a human adviser would be. While it may be true that machines do not yet have feelings in the way that humans do, recent studies of machine learning show that programs can end up exhibiting the same biases as their developers, since the programs learn from datasets curated by those developers. They are therefore not an infallible solution either.
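To illustrate the mechanism, here is a deliberately tiny, purely hypothetical sketch: the training data, labels and matching rule below are invented for this example, and no real robo-adviser works this way. The point is simply that a model trained on skewed data reproduces the skew:

```python
# Purely illustrative: a curated dataset in which every 'approve'
# example happens to come from one demographic group.
training_data = [
    {"group": "A", "income": 60, "label": "approve"},
    {"group": "A", "income": 55, "label": "approve"},
    {"group": "B", "income": 60, "label": "reject"},
    {"group": "B", "income": 55, "label": "reject"},
]

def predict(applicant):
    # 1-nearest-neighbour on the curated data: the 'model' simply
    # reproduces whatever pattern the curator put into the dataset.
    nearest = min(
        training_data,
        key=lambda row: (row["group"] != applicant["group"],
                         abs(row["income"] - applicant["income"])),
    )
    return nearest["label"]

# Two applicants identical in every financially relevant respect:
print(predict({"group": "A", "income": 58}))  # -> approve
print(predict({"group": "B", "income": 58}))  # -> reject
```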
Another point often brought forward is that human advisers are prone to conflicts of interest between serving the customer, serving the bank and lining their own pockets. While it is true that there may be some conflicts of interest associated with advice in general, bringing in a robo-adviser to replace the human is no guarantee of impartiality. As pointed out by Bloomberg earlier this year, robo-advisers can be just as prone to favouring certain products over others, depending on how the financial institution decides to earn its money. This leads us to a point that NFU has been advocating for many years: only by changing the corporate culture and priorities in the financial sector can the problems associated with mis-selling be avoided. Sales pressure and internal promotion of bad products can have just as damaging an effect on the customer whether it comes from a human or a robo-adviser. In fact, as we will touch upon later, it might have even more of an effect if it comes through a robot.
In other cases, the human will have a distinct advantage over the programs. While robo-advisers can easily and cheaply gather standardised information from customers, they are not particularly adept at obtaining information specific to each customer.
‘There remain subtle ways in which a good broker may elicit relevant information in the course of a conversation. What is the likelihood of a borrower starting a family in the next few years, etc.’
This information is ultimately very important for the proper management of a customer’s portfolio, and failing to consider such factors could lead to unsuitable investments and products being sold to some customers.
Furthermore, because advice given by programs is often streamlined to be as frictionless as possible, some investors may inadvertently miss information they should receive, simply because they choose to skip over it. While there is no guarantee that a human adviser would provide the information, a human at least has a better chance of sensing along the way whether a customer has understood what is happening with his/her money. This lack of information and education can also lead to customers, used to just sending money to an app on their phones, swiftly becoming disconnected if equity markets tumble, leading to sharper sell-offs as unprepared investors on automated platforms make hasty decisions. This not only hurts the customer but also adds more uncertainty to the financial system.
Finally, as pointed out by several researchers, robo-adviser methods have yet to be tested during a substantial market downturn. Nor can they provide the human touch that can guide investors through difficult situations when the market takes a dive and prevent them from selling their shares in a panic.
Interestingly enough, the latest FT300 list of registered investment advisers is the first since 2014 not to contain a robo-adviser. This seems odd given the large increase in robo-advisers in recent years, but it might indicate that customers and large investors are still not sold on the idea of taking advice from a program, even after several years of doing so.
One point that has so far not received much attention in this debate, but which could have quite important ramifications for customers, is the development of emotional intelligence in robo-advisers.
‘Emotional intelligence is the ability of individuals to understand their own and others’ feelings, distinguish between various emotions and label them correctly, and use the emotional information to guide behaviour and thinking.’
This ability is part of what has set humans apart from earlier programs, and from a developer’s standpoint it makes sense to build this type of intelligence into their programs. The work could have significant applications: the ability to empathize (or appear to empathize) is an important component of human communication, and various studies have shown that humans are much more likely to have a positive reaction to an empathic conversation. That is the important part: through these robo-advisers, financial institutions can now start to influence the reactions that customers have when receiving advice. In its mildest form, there would be nothing wrong with making customers feel happy while engaging with them.
However, what happens the day an institution decides to use the enhanced emotional intelligence of its system to emotionally manipulate its customers into buying more, or more expensive, products, all while they have a ‘positive reaction’? What happens when customers are not aware that they are speaking with a robo-adviser, and so do not consider the added risk this may pose? And what happens when the robo-adviser is programmed to oversell a certain product even if it is not to the benefit of the customer? Who is then going to blow the whistle and call out the institution for bad business practices?
This technological development is still in its infancy, yet within a very short time these possibilities will become real, with all their benefits and risks. That is why NFU continues to raise these issues with legislators and international counterparts and to highlight that automated advice should complement, not compete with, human advice. Because if human advisers are pushed out and replaced by robo-advisers, customers will not get better advice, and no one will be left to fill the gaps when the program breaks, is hacked, or customers just plainly want to speak with an actual human.
Morten Clausen, Policy Officer UNI Europa Finance
@mf_clausen