AI Regulations – Staying Ahead of the Curve with the AI Act

By Cole Pruden


AI is advancing faster than anyone can hope to keep pace with. And yet, many countries around the globe are working tirelessly to stay ahead of the trend, while others, admittedly, are falling behind.

Blink… And You’ll Miss It

No matter how many times we watch films like I, Robot and Ex Machina, it seems we just can’t help but push the boundaries of Artificial Intelligence further and further. Of course, AI isn’t all bad. In fact, it has countless beneficial uses in every field, from medicine and space exploration to something as simple as holiday shopping.

But one thing that truly sets AI apart from other inventions and technological booms is just how rapidly it’s advancing. Only a few years ago, models like ChatGPT and DALL-E were in their infancy. Today, you can instantly generate a well-organized, properly sourced legal document or an artistic masterpiece in less time than it takes to finish reading this sentence.

For all its advantages, AI also presents a plethora of regulatory headaches, security issues, copyright doubts, and sensitive-information breaches. So, which government body is currently leading the AI-regulatory pack, and is there even any hope of “putting the cat back in the bag”?

The AI Act

In many cases, the European Union has proven to be the frontrunner in protecting people’s sensitive and private information online. For instance, the GDPR has served as the basic blueprint for data protection regulation for just about every other government body on earth.

And now we’re seeing something similar from the EU for controlling AI and its varied use cases. The AI Act entered into force on August 1, 2024, with its obligations phasing in over the following two years, and it presents the most comprehensive governance of its kind to date.

In its own words, the AI Act “aims to foster responsible artificial intelligence development and deployment in the EU”. But how exactly does it plan on accomplishing such a wide-ranging task?

Four Risk Levels

Because “Artificial Intelligence” can take so many forms, it’s important to identify its various uses and their risk levels. The AI Act’s risk framework looks like this:

  1. Minimal Risk: Most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct.
  2. Specific Transparency Risk: Systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
  3. High Risk: High-risk AI systems such as AI-based medical software or AI systems used for recruitment must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information, human oversight, etc.
  4. Unacceptable Risk: For example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people’s fundamental rights and are therefore banned.

By defining an AI tool’s risk level, the AI Act can then regulate or outright ban certain models. This risk framework also paints a clear picture for AI software developers and companies of the potential security and human rights risks they need to avoid to achieve regulatory compliance.
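As a rough sketch of how a compliance team might triage its own systems against these tiers, the snippet below encodes the examples above as a simple lookup. The use-case names and the default-to-high-risk rule are our own illustrative assumptions, not anything prescribed by the AI Act; real classification requires legal analysis of the Act’s annexes.

```python
from enum import Enum

class RiskLevel(Enum):
    MINIMAL = "minimal"
    TRANSPARENCY = "specific transparency"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical mapping of AI use cases to the AI Act's four risk tiers,
# based on the examples listed above. Illustrative only.
RISK_BY_USE_CASE = {
    "spam_filter": RiskLevel.MINIMAL,
    "video_game_npc": RiskLevel.MINIMAL,
    "customer_chatbot": RiskLevel.TRANSPARENCY,
    "medical_diagnosis": RiskLevel.HIGH,
    "cv_screening": RiskLevel.HIGH,
    "government_social_scoring": RiskLevel.UNACCEPTABLE,
}

def triage(use_case: str) -> RiskLevel:
    """Return the risk tier for a known use case, defaulting to HIGH
    so unknown systems get reviewed rather than waved through."""
    return RISK_BY_USE_CASE.get(use_case, RiskLevel.HIGH)

if __name__ == "__main__":
    for case in ("spam_filter", "customer_chatbot", "cv_screening"):
        print(f"{case}: {triage(case).value} risk")
```

Note the deliberately conservative default: treating anything unrecognized as high risk mirrors the Act’s compliance-first spirit, where the burden is on the provider to prove a system belongs in a lower tier.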

Code of Practice

The AI Act, however, needs more than a framework and basic goals to ensure user safety. That’s why it is being accompanied by a Code of Practice for providers of general-purpose Artificial Intelligence (GPAI) models.

This Code will address critical areas such as transparency, copyright-related rules, and risk management, while also being open to feedback directly from GPAI providers. Providers with operations in the EU will be able to submit their views and findings after applying the AI Act’s provisions for a given amount of time.

The Code of Practice is due to be finalized by May 2025, ahead of the AI Act’s rules on GPAI models taking effect in August 2025. The Code will be overseen by the Commission as well as a dedicated AI Office, tasked with supervising the implementation and enforcement of the AI Act’s rules on GPAI models.

A Safer AI World For All

That’s quite the array of protective measures and safeguards, and it further positions the EU as the aspiring global leader in safe AI.

In its own words, the EU is deploying the AI Act to “develop an AI ecosystem that benefits everyone… providing better healthcare, safer and cleaner transport, and improved public services for citizens.”

It’s obvious that the EU, the Commission, and the AI Office are fully aware of AI’s potential strengths and shortcomings. For instance, they recognize that even government bodies can benefit from AI, allowing them to offer cheaper and more sustainable transport, energy, and waste-management services.

But countless unforeseen dangers still go hand in hand with something advancing as quickly as artificial intelligence. It’ll be worth keeping an eye on the EU’s AI Act and seeing how other countries choose to follow suit or craft amalgamations of their own.

Either way, thanks to proper AI regulation, we can all sleep a little easier at night. Even after having just binged Blade Runner and Her back to back.

Get Ready for the AI Act with PII Tools’ State-of-the-Art Data Discovery and Remediation Software!