01 Nov 2023
by Or Lencher

The biggest risk of AI is that it doesn’t realise its potential (Guest blog by Bright Data)

Guest blog from Or Lencher, CEO of Bright Data.

Large Language Models (LLMs) are like any other machine - their outputs are only as good as their inputs. In this case, that’s data. At The Bright Initiative by Bright Data, the #1 web data platform, we work with not-for-profits, universities, NGOs, and public bodies to help them harness the insights contained in public web data.

Access to data is already coming under threat, as some of the platforms where massive amounts of that data exist are taking steps to limit others’ access to it. The dialogue around AI is dominated by talk of ensuring safety and managing risks, but the biggest risk of all is that AI cannot reach its potential when innovation is choked by a lack of access to data.

AI is already aiding scientific research and breakthroughs in health, it’s making businesses more efficient and supporting international trade, and it’s in our homes and cars. But if access to data is limited, then so is future development, and the sector evolves on a proprietary basis. Innovation becomes shaped by the strategies of the market incumbents who hold the data, and by the bias inherent in limited datasets, rather than by the best ideas, market forces, or social need. That should be something governments across the world are worried about.

Back in June, the All-Party Parliamentary Group on Data Analytics published a report on an ethical AI future, which made the case for global cooperation on regulation. The report also calls for a National AI Centre to bring together existing domestic regulators and ensure coordination across the many aspects of daily life that AI is poised to reach. This gets to the heart of the issue with AI. It isn’t something that will hit one country, sector, or aspect of our lives and touch nothing else.

With this kind of broad impact, the light-touch regulation suggested in the government’s AI White Paper just won’t cut it. The pro-innovation instinct is the right one with emerging technology, but rather than little regulation, it requires good regulation. Good regulation would focus on the inputs – primarily data – and on safeguards around issues like bias, while encouraging a level playing field for innovation. Regulation, done right, creates the stability that businesses need to invest, safe in the knowledge that the goalposts won’t move and the vital inputs they need - data - will continue to be accessible. Combined with a strong industrial strategy on AI, regulation can ensure a competitive marketplace for ideas that are both economically viable and respond to social needs.

The challenge for the UK is to move quickly and effectively, as other nations are jostling for a position as the global leader. The AI Safety Summit could represent a step forward in that quest for the UK, but it’s not a clear frontrunner. Firstly, many of the key tech innovations are happening in the US, and many of the big players for the wider adoption of AI are also headquartered there. Secondly, while the UK has made legislative progress with its Online Safety Bill - which is awaiting Royal Assent and will empower Ofcom while putting much more of the responsibility for protection on tech firms - as well as laying the groundwork with the AI White Paper, other countries have made progress too.

In the US, President Biden has convened tech leaders to discuss managing the risks from AI, including pushing them to label AI-generated content and tackle bias, while protecting privacy and shielding children from harm. As an international business with the mission of keeping public web data public, we're active in the discussion of the future of AI and how public web data will shape it. AI will play a huge role in the advancement of humanity - it's already changing how we work and how we live, but the pace and depth of the progress from here are defined by its fuel and how it is regulated. 

To position itself as a global leader on AI, the UK needs to tackle the biggest risk facing the field of artificial intelligence: the threat to access to public web data. If we get that right, the opportunities for AI to benefit people and society are endless.


For more on AI, including upcoming events, please visit our AI Safety Summit Hub.
 

How do we ensure the responsible and safe use of powerful new technologies?

Join techUK for our 7th annual Digital Ethics Summit on 6 December. Given the ongoing concerns about the impact of emerging tech, the Digital Ethics Summit will explore AI regulation, preparing people for the future of work, the potential impact of misinformation and deepfakes on elections, and the ethical implications of tech on the climate emergency.

Book your free ticket

 

 

Authors

Or Lencher

CEO, Bright Data

Bright Data is the industry-leading public web data platform. Fortune 500 companies, academic institutions, non-profits and small businesses rely on Bright Data’s solutions to retrieve and analyze public web data in the most efficient, reliable, and flexible way so they can make better and faster business-critical decisions. The Bright Initiative by Bright Data partners with non-profits, researchers, universities, NGOs and public bodies to award pro-bono access to Bright Data’s leading technology and expertise to drive positive change throughout the world.