News Highlight

June 10, 2021
Artificial Intelligence (AI) has been compared to electricity: it is a general-purpose technology with applications in all domains of human activity. Electricity found uses that no one envisaged when the first electrical systems were designed and, in practice, life would be completely different without it. With AI also set to find its way into almost every field of activity, the European Union wants to ensure this technology is human-centered, in the sense that it is governed by ethics and respects human rights. To that end, the EU is developing AI regulation designed to mitigate the risks presented by such a powerful technology.

Published over a year ago, the European Commission's White Paper on Artificial Intelligence proposed several policy actions aimed at two important, but somewhat conflicting, objectives: developing AI in Europe (an ecosystem of excellence) and limiting the risks associated with improper uses of AI (an ecosystem of trust). The White Paper strikes a reasonable balance between these competing objectives and, after a period of consultation, it was followed by a draft Artificial Intelligence Act (the Act), published less than two months ago.

Ideally, the Act would have developed the two central ideas of the White Paper: creating legislation that stimulates innovation while at the same time guaranteeing trust. In its current form, however, the document has a few drawbacks and needs to mature to meet the expectations of the AI community in particular, and of society in general.

Continue reading on Science|Business.
