AI systems are transforming society but also pose risks to
fundamental rights that require careful attention.
Without appropriate procedures and safeguards, sensitive personal data may be used illegitimately for algorithm training or other forms of automated processing. Autonomous AI decision-making without human oversight can undermine accountability and human dignity. It is therefore essential to assess and mitigate these risks so that AI advancements respect and protect core human rights while delivering their many benefits.
A balanced approach is needed to harness the potential of AI technologies in ways that protect and promote the values underpinning a just society.
This policy paper maps the legal and policy framework for AI governance and reviews concrete use cases to examine how the design and deployment of AI-enabled systems can challenge fundamental rights. It sets out practical steps and recommendations for mitigating these risks without stifling AI innovation.
Full text in English (PDF, 958 KB)
Full text in Bulgarian (PDF, 954 KB)
Full text in Dutch (PDF, 966 KB)

Full text in English (PDF, 3.44 MB)
Full text in Bulgarian (PDF, 3.47 MB)
Full text in Dutch (PDF, 3.66 MB)