By:

Bastiaan Kolster

They Ask, You Answer Coach

Reading time: ± 9 min

December 24, 2024


The AI Act 2025: What does this law mean for your business and how do you prepare?


As an entrepreneur or marketer who uses AI tools like ChatGPT and Midjourney, you may have already heard that the AI Act is coming. With all the different deadlines, risk levels and compliance requirements, you may not be able to see the forest for the trees. You're wondering what this legislation means for your daily practice and what you need to do now to prepare your business.

As a digital sales and marketing coach/trainer at Buzzlytics, I help companies and marketing teams with the responsible deployment of AI on a daily basis. From my practical experience with our AI workshop and our TAYA Mastery program, I see exactly what organizations run into when deploying AI responsibly.

In this article, I make the AI Act understandable and practical. After reading, you'll know exactly what the AI Act is, which tools fall into which risk category, and what steps you need to take when to become compliant. That way, you can apply AI without worrying about fines or reputational damage.

Please note that this article is intended to inform you about the AI Act, but we are not lawyers. For advice for your specific situation, we recommend that you contact a lawyer with expertise in this area.

The higher the risk to people, the stricter the rules.

What is the AI Act and why is it important?

The AI Act is the first comprehensive European legislation specifically aimed at artificial intelligence. Its purpose is to ensure that AI systems are safe and reliable, without stifling innovation. The Act uses different levels of risk - the higher the risk to humans, the stricter the rules.

The core of the law revolves around risk assessment, protection and trust. By classifying all AI applications into risk categories, the law ensures that AI does not cause harm or violate human rights. With clear rules, companies know where they stand and users can trust that AI is working fairly and safely.

Specifically, what does the AI Act mean for businesses and how can you prepare?

Very specifically, in the short term, the law means that as of February 2025, the use of AI tools with an unacceptable level of risk is banned.

These are, among other things, AI tools that:

  • manipulate people without them realizing it
  • exploit vulnerable groups
  • apply social scoring
  • use facial recognition in public places (with exceptions for security)

Here is a complete list of practices banned as of Feb. 2, 2025.

It is very unlikely that the average company will have to change anything because of this. It is, however, a good idea to start documenting which tools you use and what their risk levels are.

As of February 2025, the use of AI tools with an unacceptable level of risk is prohibited.

Which AI applications are covered by the new law?

The law identifies four levels of risk: unacceptable, high, limited and minimal.

Below is an explanation of the different levels and when the legislation takes effect for each level of risk.

Unacceptable risk (banned from February 2025)

As of February 2025, the AI Act prohibits technology that manipulates people without their knowledge or exploits vulnerable groups. For the average business owner, as mentioned above, this means little, as these AI systems are rarely found in normal business operations. It does make sense, however, to check your marketing tools for hidden influencing techniques.

High risk (under strict conditions from August 2026)

Strict rules for high-risk AI follow later, in August 2026. These do affect many companies.

AI tools with this level of risk can harm citizens' rights, health or safety. This is the case, for example, when AI is used to make decisions about people.

For example, are you using AI for recruitment? Then a human must always review the decisions. For popular tools like ChatGPT, you also need to keep track of how you use them and train your employees in responsible use. The same goes for systems that assess creditworthiness: you need to be able to explain how decisions are made.

If you use AI tools for recruitment, a human will have to review the decisions starting in August 2026.

Limited risk (transparency required as of August 2026)

As of August 2026, new rules apply to limited-risk AI, such as chatbots and deepfakes. The impact is practical: you have to let users know immediately that they are dealing with AI. For chatbots, this means notifying users at the beginning of the conversation that they are interacting with a chatbot. For deepfakes, you need to clearly indicate that the content was created by AI.
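For developers on your team, the chatbot disclosure can be as simple as prepending a notice to the first reply of a conversation. Below is a minimal sketch; the bot itself is a stand-in and the disclosure wording is an illustrative assumption, not legal language.

```python
# Minimal sketch: a chatbot that discloses it is AI at the start of a
# conversation. The reply logic is a placeholder; only the disclosure
# pattern matters here.
DISCLOSURE = "Hi! I'm an AI chatbot. A human colleague can take over at any time."

def chatbot_reply(message: str, history: list[str]) -> str:
    answer = f"You asked: {message}"   # placeholder for the real bot
    if not history:                    # first message of the conversation
        answer = f"{DISCLOSURE}\n{answer}"
    history.append(message)
    return answer
```

The key design choice is that the disclosure is tied to the start of the conversation, as the transparency rules require, rather than buried in a footer or terms page.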

Minimal risk (no additional rules)

For basic AI applications such as spam filters and spell checkers, nothing will change. Still, it's wise to keep track of what systems you use. This will help you stay compliant if the rules change or if you start using more advanced AI tools later.

Deadlines will follow in 2027 and 2030 for specific situations, such as AI in products and government systems. For most companies, the 2025 and 2026 deadlines are most relevant.

Start preparing for these changes in a timely manner. Focus on transparency and documentation first, and then work toward the more comprehensive requirements for 2026.

Are you using chatbots? Then as of August 2026, you must notify users that they are talking to a chatbot.

Practical implementation: this is how to become AI Act-proof

So how do you prepare for the AI Act? Below is a concrete roadmap.

Step 1: Inventory (Q4 2024)

Map all the AI systems you use. Look at your own tools, vendor software and systems used by external partners on your behalf. For each system, document the name, vendor, purpose of use and, where appropriate, the department/external partner using the tool.

Step 2: Risk classification (Q4 2024)

Determine the risk level for each system. Note that one system may have different ratings depending on its use.

Don't know where to start when classifying your AI tools? Then follow this roadmap:

1. Contact your supplier

The easiest first step is to contact your AI vendors. They are required to know the risk level of their tools and can tell you which category your use falls into.

2. Use the official checklist

The second option is to use the official online compliance checker. This tool asks targeted questions about your AI system and determines the risk level based on your answers.

Through this tool, you can also find additional information about the specific regulations that apply to you. This is because there is a difference between developers of AI tools, companies that implement AI tools in their own software, and companies that only use off-the-shelf software.

Map all the AI systems you use.

3. Apply these rules of thumb

The third way is to assess the system itself against these criteria.

Unacceptable risk:

Your AI system falls into this category if it:

  • influences people without their knowledge
  • exploits vulnerable groups
  • applies social scoring
  • uses facial recognition in public places

High risk:

Your AI system falls into this category if it:

  • makes important decisions about people (such as in recruitment)
  • is used for education or vocational training
  • assesses creditworthiness

Limited risk:

Your AI system falls into this category if it:

  • communicates directly with people (such as chatbots)
  • generates content (such as deepfakes)
  • analyzes emotions

Minimal risk:

Your AI system falls into this category if it:

  • performs basic tasks (such as spam filters)
  • does support work (such as spell checking)
  • makes general recommendations

Add the risk classification to your documentation and update it periodically so that your list is always up to date. If you are using AI tools with unacceptable risk, stop using them immediately.
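The inventory from step 1 and the rules of thumb from step 2 can be combined into a simple script. Below is a minimal sketch: the tool names, fields and yes/no questions are illustrative assumptions, and the classification is a rough aid, not legal advice.

```python
# Minimal sketch of an AI-tool inventory with a rule-of-thumb risk
# classification, as described in steps 1 and 2. Categories and example
# tools are illustrative, not a legal assessment.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    vendor: str
    purpose: str
    used_by: str                       # department or external partner
    risk_level: str = "unclassified"

def classify(tool: AITool, *,
             manipulates_or_scores: bool = False,
             decides_about_people: bool = False,
             interacts_or_generates: bool = False) -> AITool:
    """Apply the rules of thumb in order, from strictest to lightest."""
    if manipulates_or_scores:
        tool.risk_level = "unacceptable"   # banned as of February 2025
    elif decides_about_people:
        tool.risk_level = "high"           # strict conditions apply
    elif interacts_or_generates:
        tool.risk_level = "limited"        # transparency required
    else:
        tool.risk_level = "minimal"        # no additional rules
    return tool

inventory = [
    classify(AITool("ChatGPT", "OpenAI", "screening job applications", "HR"),
             decides_about_people=True),
    classify(AITool("Website chatbot", "ExampleVendor", "customer support", "Marketing"),
             interacts_or_generates=True),
    classify(AITool("Spam filter", "MailProvider", "inbox filtering", "IT")),
]

for tool in inventory:
    print(f"{tool.name}: {tool.risk_level}")
```

Checking the strictest category first mirrors the logic of the law: a tool that qualifies for a higher risk level is never allowed to slip into a lighter one.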

Document your measures so you can easily demonstrate them in an audit.

Step 3: Compliance 2025 (Q4 2024)

Prepare for the first deadline: stop using systems that fall under "unacceptable risk". This is also a good moment to start implementing the transparency requirements for limited-risk AI.

Step 4: Quick wins (Q2-3 2025)

Start with simple changes. Make sure chatbots identify themselves as AI and check marketing tools for prohibited influencing techniques. Record the measures you have taken in your overview.

Step 5: Preparation 2026 (Q3-Q4 2025)

If you use AI systems in high-risk applications, for example tools like ChatGPT or dedicated software for recruitment, you must meet specific requirements.

Here are the most important preparations:

Technical and organizational measures

  • Always use AI systems according to the supplier's instructions
  • Provide appropriate technical security
  • Establish clear procedures for use

Human supervision

  • Designate qualified staff for supervision
  • Make sure these people have the proper training and authority
  • Provide adequate support to supervisors

Data quality and monitoring

  • Check that input data is relevant and representative
  • Continuously monitor how the AI system is performing
  • Keep log files for at least 6 months
  • Report incidents to supplier and authorities immediately

Transparency

  • Inform employees in advance of AI use in the workplace
  • Make sure people know when they are dealing with AI
  • Document decisions supported by AI

Start these preparations in time - the August 2026 deadline seems far away, but these adjustments take time to implement properly.
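Two of the points above, keeping log files for at least six months and documenting AI-supported decisions, can be combined in one simple audit log. The sketch below is a minimal illustration; the field names and retention period handling are assumptions, so adapt them to your own systems.

```python
# Minimal sketch of an audit log for AI-supported decisions: each entry
# records the tool, the decision and the human reviewer, and entries are
# only pruned once they are older than the six-month retention period.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # at least 6 months

def log_decision(entries: list[dict], tool: str, decision: str, reviewer: str) -> None:
    entries.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "decision": decision,
        "human_reviewer": reviewer,   # human supervision requirement
    })

def prune(entries: list[dict], now: datetime) -> list[dict]:
    """Keep every entry that is still within the retention period."""
    cutoff = now - RETENTION
    return [e for e in entries
            if datetime.fromisoformat(e["timestamp"]) >= cutoff]
```

Recording the human reviewer alongside each decision also gives you the evidence you need for the human-supervision requirement in an audit.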

Document your measures so you can easily demonstrate them during an audit. The best approach here varies by company. Larger companies are more likely to keep risk analyses and documentation for each individual tool. Smaller companies are more likely to work on a project basis and create 'clusters' of different AI tools.

Step 6: Monitoring and adjustment (ongoing)

Check regularly that you still comply with all requirements. Adjust processes where necessary and stay abreast of new developments in laws and regulations. Much is still unclear, especially about how audits will be conducted.

By following this roadmap, you will systematically work toward AI Act compliance. Start today - good preparation will ensure that there are no surprises later.

With good preparation, you won't have any surprises later.

Getting started with the AI Act

The AI Act raises questions, but it also presents opportunities. You now have a clear picture of the rules and a practical approach for continuing to use AI in a way that is both effective and responsible. The legislation is not a barrier, as long as you make conscious choices and act transparently.

At Buzzlytics, we believe AI can enhance your business without sacrificing creativity or quality.

Our AI workshop provides you with practical tools and insights to integrate AI into your sales and marketing strategy. Are you ready to further integrate AI into your organization? Then sign up for the AI workshop here or first read the article Is Buzzlytics' AI workshop for me? for more information on whether the workshop is right for you and your organization.