Comparing your Digital Ethics ratings to your competitors - Part I

The tech industry approaches AI Governance in the same manner as it previously approached issues like sustainability: with a Corporate Social Responsibility (CSR) mindset. Principles are communicated, single initiatives are commissioned, and intentions are marketed. However, this falls short of where it needs to be: an Environmental, Social & Governance (ESG) approach, a more sustained, strategic and substantive mindset which directly connects on-the-ground activities to corporate strategy and investor relations.

It is through an ESG lens that the finance sector evaluates the sustainability of organisations, or their handling of other external pressures such as the fight to end racism. Unlike CSR, ESG can be measured and rated. ESG ratings place pressure on organisations to perform and improve, as they are measured not just against their peers but also against their own past performance. What is being evaluated is the breadth and depth of the governance being developed, and thereby the likelihood that it will help the organisation avoid expensive mistakes: those which cause lasting reputational harm, increase the cost of capital, and contribute to the 'techlash'.


Confusing Ethics with Risk

In thinking of AI Governance through an ESG lens, it is first critical to ensure the full set of issues is mapped out. In the same way that focussing on carbon neutrality might still expose an organisation to criticism for being unsustainable or pollutive, so too will a focus solely on risk or safety management leave the organisation exposed on the questions of ethics that directly fuel the 'techlash'.

Trust in the technology industry has diminished in recent years, but this is not down to an absence of regulatory compliance or a failure of engineering quality. To be sure, many firms have suffered data breaches or have contravened data privacy rules such as GDPR. But consumer trust is lost for other reasons: consumers subconsciously ask of a company or product whether it has their best interests at heart, or, to put it another way, to what extent it "crosses the creepy line".

This isn't to diminish issues such as bias or discrimination that arise from the use of data. But there is already a growing body of best practice on how to avoid or mitigate such issues, and the first line of defence is to ensure engineering standards are tight. Because the tech industry is dominated by those from engineering backgrounds, it is no surprise that this is often the domain of governance where technologists feel most comfortable. But only by taking a maximalist view of ethics can an organisation avoid contributing to the techlash.

The European Commission's HLEG-AI has called for AI to be lawful, ethical and robust. Per our paper from the start of 2020, this means that three separate domains of governance need to be instigated. The activity of managing ethics, if understood as managing the best interests of your stakeholders, therefore needs to be designed in such a way that your stakeholders' interests can be assessed and your performance against this goal continuously re-evaluated. To get started, you first need to instigate a structure under which to coordinate ethical activity.

For part two of this insight, please click here


Charles Radclyffe, AI Ethics, Technology Governance and ESG Specialist