The EU provides guidelines to encourage the ethical development of AI
According to the EU, AI should abide by the fundamental ethical principles of respect for human autonomy, prevention of harm, fairness, and accountability. The guidelines contain seven requirements, listed below, and pay particular attention to protecting vulnerable groups, such as children and people with disabilities. They also state that citizens should have full control over their data.
The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines are not intended to be, or to interfere with, policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where further guidance may be needed and determine how best to implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake news-generating algorithms, more governments are likely to weigh in on the ethical issues that AI brings to the table.
A summary of the EU guidelines is below, and you can read the full PDF here.
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit, or misguide human autonomy.
- Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
- Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems should be ensured.
- Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills, and requirements, and ensure accessibility.
- Societal and environmental well-being: AI systems should be used to enhance positive social change and foster sustainability and ecological responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.