Ethics in Artificial Intelligence: When technology decides, rights matter.
Artificial Intelligence (AI) has moved beyond experimentation and pilot projects. It is now embedded in everyday systems that influence access to jobs, credit, education, healthcare and information. As AI becomes increasingly integrated into decision making and content creation, anyone seeking to make the most of the technology needs to consider how to ensure its adoption remains lawful and responsible. AI remains front-page news, and individuals increasingly expect to understand how it will interact with them, their data and their everyday lives.
With the legal frameworks already in place subject to challenge and change, taking the right approach to AI adoption is complex. Legal certainty will take time to develop, and businesses wanting to make the most of AI now will need to take ethics and human rights issues into account as new technology and use cases develop.
International guidance shows strong alignment on what ethical AI requires. Frameworks developed by the OECD and UNESCO consistently emphasise respect for human rights, fairness, transparency, safety and accountability as core principles. Their function is to keep automated and semi-autonomous systems compatible with human dignity, democratic values and the rule of law, and to ensure that the right balance is struck between the advantages of innovation and the impact on individuals.
Human rights form the foundation of this approach and can be used as a framework to identify whether the impact and intrusion of AI is proportionate. Freedom from discrimination, the right to privacy, freedom of thought, the right to work and the rights to culture, art and science are all very relevant considerations when adopting AI.
The risks of ignoring ethical principles are already widely reported. Biased models can exclude individuals or groups from opportunities. Lack of explainability prevents understanding and prejudices the ability to correct harmful outcomes. Fully automated decisions without meaningful human oversight increase the risk of discrimination. AI-generated content that is not labelled can mislead users around the legitimacy of views and opinions. These risks translate directly into regulatory exposure, liability, reputational harm, and loss of confidence among customers, employees and regulators.
For ethics to be operational, they must be embedded into governance and risk management processes. They must be considered at the outset of any proposed AI use case to ensure all relevant issues are in play when a business is making its decision. Several frameworks are designed precisely for this purpose:
- The NIST AI Risk Management Framework provides a lifecycle-based approach to identifying and managing AI-related risks.
- The IEEE 7000-2021 Standard connects ethical values with engineering practice, requiring organisations to identify ethical concerns, translate them into system requirements and maintain traceability throughout design and deployment.
- HUDERIA, developed by the Council of Europe, complements this approach by focusing explicitly on human rights, democracy and the rule of law, offering a structured methodology for assessing the societal and rights-based impacts of AI systems.
A key mechanism linking ethics, rights and liability is the use of impact assessments. Fundamental Rights Impact Assessments (FRIAs) play a critical role in identifying how AI systems may affect rights such as equality, privacy and autonomy. These are mandated for high-risk AI use cases under the EU AI Act but can be a helpful process in assessing the impact of any new technology. In practice, organisations can leverage and extend GDPR Data Protection Impact Assessments (DPIAs) to cover broader fundamental rights risks, particularly where AI will have access to personal data. This combined approach strengthens transparency, supports meaningful human oversight and provides evidence of due diligence, which is increasingly essential under the EU AI Act and wider regulatory expectations.
Transparency and explainability are the common threads across all these frameworks. Informing individuals when AI is used, documenting systems, enabling human review and clearly identifying AI-generated content are not procedural formalities. They are the mechanisms that allow rights to be exercised and responsibility to be enforced.
The question organisations must now confront is not whether they use AI, but whether they can explain, justify, and defend how it is used. When decisions are automated at scale, responsibility does not disappear. It concentrates.
KPMG Law’s specialist team advises on data, privacy and human rights in the context of AI adoption. Contact James Cassidy, Director, Data Protection, at [email protected], or Johanna Suarez Borja, Manager, Data Protection, at [email protected].