Open banking is revolutionising financial services and creating opportunities for people and businesses to understand their finances like never before. To date, open banking has focused on accessing data from current and credit card accounts and analysing how people and businesses spend. However, in the future, it is highly likely that this data will be combined with savings, mortgages, pensions, insurance and other finance-related data to give a full picture of financial protection or wealth management.
Machine learning techniques are making it easier to spot fraud and protect the customer. They are also being used to identify investment opportunities, reduce business expenditure and guide smart investments for individuals. However, there is a potential for machine learning and other forms of artificial intelligence (AI) to be deployed by financial services in ways that many consider to be unfair or unethical.
For example, anti-discrimination laws generally prohibit discrimination against a person where the mistreatment is attributable to a protected characteristic such as race, gender or sexual orientation. The issue with using machine learning techniques in offering financial products and services to customers is that the technology may 'learn' to discriminate against specific groups in society, whether on the basis of a protected characteristic or of another data point, such as a postcode, that acts as a proxy for one and can lead to demographic profiling.
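The proxy effect described above can be illustrated with a minimal sketch using entirely hypothetical, synthetic data. Here, a lending rule never sees the protected characteristic at all, yet because postcode zone correlates with group membership, the outcomes still differ sharply by group. The `make_applicant` and `approve` functions, the group labels and the correlation strengths are all invented for illustration, not drawn from any real lender's data.

```python
import random

random.seed(0)

# Hypothetical synthetic population: postcode zone correlates with a
# protected group (group "B" applicants mostly live in zone 2).
def make_applicant():
    group = random.choice(["A", "B"])
    zone = 2 if random.random() < (0.8 if group == "B" else 0.2) else 1
    return group, zone

def approve(zone):
    # A seemingly "neutral" rule: approve only zone-1 applicants.
    # The protected characteristic is never used as an input.
    return zone == 1

applicants = [make_applicant() for _ in range(10_000)]

def approval_rate(group):
    decisions = [approve(zone) for g, zone in applicants if g == group]
    return sum(decisions) / len(decisions)

# Roughly 80% of group A is approved versus roughly 20% of group B,
# even though the rule only ever looked at postcode.
print(f"Group A approval rate: {approval_rate('A'):.2f}")
print(f"Group B approval rate: {approval_rate('B'):.2f}")
```

The same dynamic arises, less visibly, when a machine-learned model picks up the postcode correlation on its own during training.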
Pinsent Masons' research reveals that 56% of consumers want to know when their financial services providers are using AI for financial decisions that are made about them. Underlying this, one of the most commonly cited consumer concerns is that the data used by the AI may be inaccurate or biased.
For financial institutions, this means that internal processes and controls should be put in place to avoid bias being introduced into decision making. Existing practices, such as sample checking, remain key to ensuring that outcomes stay appropriate as new technology is implemented. In the UK, financial institutions are generally required by regulators to treat their customers fairly and act in their best interests. These general principles need to be thought through in this context.
To avoid treating customers unfairly when accessing data through open finance and relying on machine learning technology, financial institutions should run impact assessments before using new technologies and ensure that the right people within their organisations are asking the right questions. Some of these questions have been put forward by the European Commission as part of its framework for trustworthy AI.
To give some examples, a business should ask, have we:
- addressed possible limitations stemming from the data sets on which our technology relies?
- tested that the data set provides for diversity and broad representation of customers?
- put in place processes to test and monitor for potential bias during the development and use phase of the technology?
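The third question, on testing and monitoring for bias, can be made concrete with a minimal sketch of one common check: comparing approval rates across groups against a disparity threshold. The decision log, group labels and the use of an 80% ratio threshold (the 'four-fifths rule' heuristic from US employment practice) are illustrative assumptions; a real monitoring process would use larger production samples and its own regulator-informed thresholds.

```python
from collections import defaultdict

# Hypothetical decision log sampled from production: (group, approved) pairs.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", True), ("B", False),
]

def approval_rates(log):
    # Tally approvals per group, then convert to per-group rates.
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in log:
        totals[group] += 1
        approvals[group] += approved
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    # Ratio of the lowest group approval rate to the highest;
    # the four-fifths heuristic flags ratios below 0.8.
    return min(rates.values()) / max(rates.values())

rates = approval_rates(decisions)      # here: A = 4/5 = 0.8, B = 2/5 = 0.4
ratio = disparate_impact_ratio(rates)  # 0.4 / 0.8 = 0.5
if ratio < 0.8:
    print(f"Potential bias flagged: ratio {ratio:.2f} is below the 0.8 threshold")
```

A check like this would run continuously over live decisions, during both the development and use phases, with flagged disparities escalated for human review.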
These are just the tip of the iceberg. Open finance is highly likely to improve the lives of people and businesses by enabling them to engage with their financial data more directly. As AI improves and assists in this process, businesses must respond by putting in place effective frameworks to ensure that these technologies are used in a trustworthy way.