12 Oct 2022

White House releases Blueprint for an AI Bill of Rights

On Tuesday 4 October, the US White House Office of Science and Technology Policy (OSTP) published a Blueprint for an “AI Bill of Rights”, setting out five principles that should guide the design, use and deployment of automated systems.

The bill of rights was trailed for the first time in October last year, when the then Director and Deputy Director of the OSTP published an article in Wired outlining some of the most common concerns around AI technologies – including the use of algorithms to make significant decisions about people’s lives, the use of biometric technologies to analyse human behaviour and characteristics, and the extent to which our smart devices record, store and share our behaviour. To address these and other worries, the OSTP embarked on a major public engagement exercise, which, alongside engagement with a wide range of experts, informed what should go into a “bill of rights for an AI-powered world.”

This exercise has now resulted in a Blueprint for an AI Bill of Rights, consisting of five principles, each with an explanation of why it is important, what should be expected of automated systems with regard to it, and examples of how it has been implemented in practice. The five principles are:

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.
  • Human Alternatives, Consideration and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The Blueprint is put forward as a guide for anyone in society who can help protect people from any threats presented by AI. The examples of putting principles into practice therefore range from industry initiatives like risk assessments and auditing mechanisms, to guidelines from standards organisations and government departments, and regulatory action across sectors.

When it comes to defining the types of technologies that should be covered by the initiative, the OSTP has opted for a broad approach, arguing that many of the potential harms highlighted can be caused by tools less complex than what would traditionally be categorised as AI. Systems in scope are therefore 1) automated systems that 2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services. The Appendix (pages 53-55 of the Blueprint) provides a list of examples of automated systems that fall within scope, although the authors are careful to highlight that the list is not exhaustive.

While the Blueprint is framed much more broadly, it has interesting parallels with the UK Government’s publication Establishing a pro-innovation approach to regulating AI, published in July. There, the focus is specifically on the role of regulation, but the document also provides a set of principles (overlapping significantly with those chosen by the OSTP) which regulators should seek to enforce in cases where AI is considered to pose significant levels of risk. It is welcome to see governments across the world take seriously the need to protect the public from the potential harms of these technologies, and techUK applauds the principles-based approaches and the focus on significant risks seen in both the Blueprint for an AI Bill of Rights and the UK Government’s policy paper.

The UK approach is still evolving: the Government is planning to publish a white paper setting out more detail on how regulators should implement the principles. This follows a call for views on the policy paper, to which techUK submitted a response last month.


Emilie Sundorph

Programme Manager, Digital Ethics and Artificial Intelligence, techUK

Emilie joined techUK in June 2021 as the Programme Manager for Digital Ethics & AI.

Prior to techUK, she worked as the Policy Manager at the education charity Teach First and as a Researcher at the Westminster think tank Reform. She is passionate about the potential of technology to change people's lives for the better, and working with the tech industry, the public sector and citizens to achieve this.

Emilie holds a master's degree in Philosophy and Public Policy from LSE. In her spare time she is currently trying to learn Persian and improve her table tennis skills.

[email protected]
+44 (0) 7523 481 331
