Unlocking trust in AI
BSI’s recent “Trust in AI” survey of more than 10,000 adults from nine countries (including Australia, China, the UK, and the US) sheds light on current attitudes toward artificial intelligence (AI).
2 February, 2024 - AI is no longer a concept from science fiction. It's becoming an everyday reality with the potential to transform our world and be a force for good across society. However, to truly flourish, it needs the confidence of the public.
BSI’s survey explores the opportunities AI offers to shape a better future and accelerate progress toward a sustainable world. Let’s explore a few findings from the survey:
How would you like to see AI shaping our future by 2030?
79 percent believe AI is going to help with carbon reductions; however, many remain sceptical.
AI is poised to become an integral tool for companies to efficiently determine the carbon footprints of products by enabling the swift measurement and analysis of greenhouse gas emissions. However, the public needs to be convinced that this technology is truly effective in order to reap the environmental benefits.
BSI’s survey asked participants to assess the level of trust necessary for the utilization of AI in different crucial sectors. The results found that 74 percent indicated "trust was needed" for AI in medical diagnosis and treatment, and 75 percent expressed the same for AI in food manufacturing, which includes tasks like ordering and categorizing food based on use-by dates.
This was replicated across all industries, with between 72 percent and 79 percent of consumers agreeing trust is required for AI to be effectively used in everything from cybersecurity to construction and financial transactions.
38 percent say their job currently uses some form of AI daily.
Some countries are already further ahead with AI usage. China (70 percent) and India (64 percent) lead the way in daily usage of AI at work, while respondents in Australia (23 percent) and the UK (29 percent) use it the least. In the US, 37 percent say they currently use AI at work, while 46 percent do not—and about 17 percent are unsure. Among those globally who currently do not use AI at work, nearly half were unsure if their workplace will adopt it by 2030.
Uncertainty around the uptake of AI exists at the organizational level, especially within cybersecurity and digital risk policies. For some organizations AI is relatively new, and they have decided to prohibit the use of such tools on corporate systems to mitigate perceived threats. In some cases, employees have found ways of bypassing these controls by using home computers to complete their work.
Providing training to build an understanding of how AI can be used at work is critical, yet data suggests this is not commonplace at present. Helpfully, 55 percent of respondents agree we should be training young people to work in an AI-powered world.
61 percent want international guidelines to enable the safe use of AI.
Guidelines around the safe and responsible use of AI are in demand to further build trust in and awareness of the technology. International standards, such as the forthcoming AI management system standard (ISO/IEC 42001), and regulations play a role here.
Governments and regulatory agencies are also beginning to tackle the issue. For instance, President Biden recently issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, outlining the US government’s approach to "governing the development and use of AI safely and responsibly."
Similarly, the EU's Artificial Intelligence Act sets out a "legal framework that aims to significantly bolster regulations on the development and use of artificial intelligence." Other nations are following suit, as evidenced at the AI Safety Summit 2023.
The current lack of understanding and unsteady trust in AI presents a major obstacle, potentially limiting its adoption and benefits. To build confidence and trust in the technology, organizations can establish guardrails governing ethical AI use. (Read Ethical considerations of AI in healthcare by Shusma Balaji, Data Scientist, BSI.)
We've seen the pitfalls of prior tech revolutions that overlooked public trust, such as the dot-com boom and the rise of social media. As the AI era dawns, equipping people with the proper tools and knowledge is essential to realize AI's potential while avoiding past mistakes.
This article was originally published by Forbes under the title: How To Unlock Trust In AI: Key Insights From BSI's 'Trust In AI' Poll on December 15, 2023. Join Mark Brown and Digital Trust colleagues as they present at SASIG’s The Future of AI: Friend or Foe? on February 13, 2024.