The AI Act and Your Business

Cole Pruden · Artificial Intelligence, Compliance

The two-year provisional period is nearly over, and soon the EU AI Act will go into full force. Let’s go through everything you need to know to prepare your business while still getting the most out of AI.

What Is the AI Act?

With precedents as thorough as the GDPR behind it, it’s on-brand for the European Union to be the first governing body in the world to release a regulation on how businesses and organizations can safely use AI.

The AI Act is “a European regulation on artificial intelligence (AI)”, most recognizable by its four major risk classifications. It provides a framework that everything from small start-ups to giant corporations can use to “foster responsible artificial intelligence development and deployment in the EU”.

And with the AI Act’s two-year provisional period set to conclude on 2 August 2026, let’s get right into the first steps leading to compliance.

(Or if you’re looking for a more detailed breakdown of the AI Act, you’ll find it here.)

EU AI Act Compliance Checker

The first step you can take right now, before the AI Act even goes into full force, is to fill out the EU AI Act Compliance Checker for each individual AI system used in your organization.

The reason for using this interactive tool is “to determine whether or not your AI system will be subject to the EU AI Act’s new obligations on entities located within the EU and elsewhere”. And it does so by asking you a few questions about your business and AI system.

The form is meant to be filled out by everyone, from providers and deployers to product manufacturers and distributors. Then, depending on your position, you’ll answer further questions about your AI system and its safeguards. The final result will tell you if the AI Act’s new requirements will apply to your business.

And let’s say it does. What now?

Prohibited AI Practices

After filling out the Compliance Checker, you may discover that some of the practices deployed in your company’s AI system will soon be banned under the AI Act. That would be a severe result, but probably not a surprising one: many of the outlawed practices would be considered shady, or outright illegal, even without the AI Act.

A few examples of prohibited AI practices include:

  • Manipulative or deceptive techniques that distort behavior.
  • Exploitation of vulnerabilities based on age, disability, or socio-economic status.
  • Social scoring systems evaluating individuals based on behavior or personal traits.
  • Predictive policing relying solely on profiling.
  • Untargeted scraping of facial images for biometric categorization, and more.

Employees and the AI Act

Although we often think of AI models as having a mind of their own, the AI Act also sets out guidelines on how we – humans – interact with and use them at work.

Under the AI Act, all organizations must ensure that employees involved with AI systems possess adequate competence in three main categories:

  1. Legal: Understanding rights and obligations under the AI Act.

  2. Technical: The proper use and operation of AI systems.

  3. Ethical: Awareness of social and ethical implications of AI.

In other words, plan company-wide training on the AI Act and on the changes your business will make to ensure compliance. Employees should also receive tailored training programs covering all the new internal policies and standards they will be expected to uphold.

Transparency Obligations

Then there’s the user end of things. Chapter IV of the EU AI Act is dedicated to providers and deployers of certain AI systems and their transparency obligations.

This relatively long chapter can be broken down into three key requirements. For AI systems interacting with individuals, organizations are required to:

  • Inform users when they are interacting with an AI system, unless self-evident.

  • Mark outputs from AI systems that generate synthetic content (e.g., deepfakes) in a machine-readable format.

  • Disclose the operation of emotion recognition or biometric categorization systems to affected individuals.

These transparency obligations aim to promote user trust in AI technologies.
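As a rough illustration of what “machine-readable” marking could look like in practice, the sketch below wraps AI-generated text in a JSON envelope carrying an explicit disclosure field. The field names and helper function here are our own invention for illustration only; the AI Act mandates the marking, not any specific schema, and real deployments would likely follow an emerging provenance standard such as C2PA.

```python
import json
from datetime import datetime, timezone

def label_synthetic_content(content: str, generator: str) -> str:
    """Wrap AI-generated content in a machine-readable disclosure envelope.

    Illustrative only: the schema below is hypothetical, not a format
    prescribed by the AI Act.
    """
    envelope = {
        "content": content,
        "ai_disclosure": {
            "ai_generated": True,        # explicit transparency flag
            "generator": generator,      # which AI system produced the output
            "labeled_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

# Any downstream consumer can detect the disclosure programmatically:
labeled = label_synthetic_content("A sunny day in Brussels.", "example-model-v1")
record = json.loads(labeled)
print(record["ai_disclosure"]["ai_generated"])
```

The point of the exercise is that a human-readable caption (“generated by AI”) is not enough on its own: the label must also be detectable by software, which is what a structured field like this provides.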

Your Business and Safe AI

Anyone following the news knows the countless horror stories and pitfalls of careless AI usage. They range from lawyers citing court cases that never existed, despite ChatGPT’s assurances to the contrary, to Zillow laying off 2,000 employees and losing as much as $304 million after its AI tool “Zestimate” led it to purchase homes above market value.

AI models can also put company data at risk, something the EU’s AI Act is set to tackle by attempting to eliminate as many potential issues as possible. The Act is also making waves around the world, as other countries follow its example and pass their own bills creating legal frameworks for artificial intelligence.

The AI Act’s obligations take full effect in 2026, so it’s time to ensure your business is ready for them as soon as possible. Not only will you avoid potential headaches and even fines, but you’ll also be creating a secure space where employees, users, and AI systems can all work together safely.

Prepare for the AI Act by Discovering & Remediating All At-Risk Data with PII Tools!