The vote in Brussels this week on a law to regulate the use of artificial intelligence in the EU is a welcome step. The world is still in the early stages of understanding the impact of AI on businesses and society. On the one hand, it is hailed as a technology that could solve world hunger, cure disease and change the world. On the other, it is feared as a destructive force that could lead to the extinction of humanity through disinformation, disempowerment and weaponisation. What is clear is that it is a powerful tool.
The law aims to ensure the IT security, transparency, non-discrimination and traceability of AI systems so that they are not exploited by third parties or used for malicious purposes. A notable feature of the EU's AI Act is its proposal to assign identities to AI models. These digital identities, comparable to human passports, will allow for unambiguous identification. In addition, models will be subject to a conformity assessment before they can be registered in the EU database. This forward-looking approach will improve AI governance, protect individuals and help maintain control over AI.

Companies that use and innovate with AI need to assess whether their systems fall under the risk categories proposed in the AI Act. They should also complete the required assessments and registration to keep AI safe and maintain society's trust in it.
Author: Kevin Bocek, VP Ecosystem & Community at Venafi