

The EU Leads the Way with AI Act, and That’s a Good Thing


Nicholas Karlsen

Director of Cyber Security & Compliance, NNIT

Sofie Bøttiger Conlan

Senior Cybersecurity and Compliance Consultant, NNIT

AI technology comes with great potential, but also considerable risk. Some say the possibilities are endless; at the very least, AI is as revolutionary in our time as electricity and the steam engine were in the 19th century. That is why we should embrace regulation rather than wait and see what happens when risks become reality.

Imagine not quite knowing the potential or effects of a powerful new technology, yet setting no boundaries for its implementation and use. Would you get behind the wheel of a car with no brakes and drive it through a region with no traffic regulations?

The EU’s AI Act is a first, responsible stab at setting some ground rules to guide current and future implementation of AI systems. And it is pre-emptive rather than reactive, which is quite impressive considering 27 nations had to arrive at a consensus in a short time span. Put simply, we see it as sensible versus reckless implementation.

Harmonizing implementation across the EU, as well as taking the responsible lead on acceptable use of AI technology, makes a lot of sense, and here is why:

Clear Guidelines for Acceptable Use

The AI Act, while far from perfect, sets limits on unacceptable and high-risk uses of AI. With its four risk categories (unacceptable, high, limited, and minimal risk), the Act outlines what constitutes unacceptable and high-risk behavior.

In that sense, the AI Act provides a map for navigating between right and wrong in AI territory, and it takes a stand on the more dubious potential uses of AI to violate human rights, including physical and emotional surveillance and manipulation. Any such use is heavily regulated and comes with a whole set of requirements for detailed and documented risk reduction, data validation, activity logging, information procedures, human oversight, etc.

This allows companies and organizations to identify the acceptable categories and design their systems to stay within the boundaries of acceptable use.

Protection of Well-established Human and Legal Rights

In many ways, the AI Act takes its cue from established European frameworks such as the human rights conventions and the GDPR. In that sense, the AI Act is neither radical nor surprising – it imposes a set of regulations that, to a large degree, were already in place in the physical world.

The European consensus on and protection of human rights, data privacy, non-discriminatory environments and freedom of speech logically extends to AI system use with the AI Act.

Compliance Ensures Responsible Choices

While some may see the AI Act as more red tape and as hampering AI innovation, we see it as a prudent approach to implementation of new technology – an example to follow for other territories that have yet to introduce regulation.

When compared to established conventions and norms across the EU, we do not see the risk categories as controversial or restrictive. Yes, the AI Act requires companies and organizations to be mindful and take necessary steps to ensure compliance, but compliance also ensures responsible choices. Unbridled use of AI should not be the norm.

An AI Management System Based on ISO/IEC 42001 is a Good Start

We acknowledge that small and medium-sized enterprises in particular may find it difficult to get started with implementing trustworthy AI, even with the AI Act in hand. Like any legal text, it takes practice to read and apply the rules correctly. At the same time, we feel confident that this will only improve with experience, and as updates are made in the years to come.

A good place to start is to consider the new ISO/IEC 42001 standard, which specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). An AIMS based on ISO/IEC 42001 ensures accountability and responsibility, improves decision-making on AI, ensures continuous learning in a rapidly evolving field, and cements commitment to the design, development, and deployment of trustworthy AI.

Proceeding without Risk Management is not an Option

Finally, and in addition to the AI Act, we recommend that you not abandon common sense and human control. AI systems may simulate human intelligence and the ability to draw logical conclusions (extrapolation), but in reality, AI systems only know what they have been trained to know (interpolation) – and what they know may very well be false and/or biased.

We will leave you with a link to Tech Policy Press’ article on AI risk management, centered on the case of Enron – and what happens when potential and risk are not properly balanced, and company values and intentions are not practiced and upheld (with the help of official regulation): What today’s AI companies can learn from the fall of Enron

Our point is: embrace the AI Act and the ISO/IEC 42001 standard as tools to ensure responsible choices and to save you from having to develop a comprehensive AI risk management system yourself. Proceeding without them is not a responsible option.

When does the AI Act enter into force?

The EU AI Act was adopted in 2024 and entered into force on 1 August 2024.

Because the AI Act is a regulation, it does not first need to be implemented into national law by the Member States, but applies directly across the EU. As a general rule, the regulation will be fully applicable from 2 August 2026. However, certain rules have already entered into application, while others will apply at a later stage:

  • The prohibition of AI systems with an unacceptable level of risk and the requirement for AI literacy, which have applied since 2 February 2025.

  • Codes of practice, which were due nine months after entry into force (by 2 May 2025).

  • The rules for GPAI models, which applied from 2 August 2025.

  • Most other rules, which will apply from 2 August 2026.

  • Requirements for certain high-risk AI systems, including AI systems that are embedded in safety-critical regulated products, such as medical devices or vehicles, which will apply from 2 August 2027.
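The phased schedule above can be sketched as a small lookup that tells you which rule sets are already in application on a given date. The dates and descriptions come from the list above; the dictionary and function names are illustrative, not terminology from the Act itself:

```python
from datetime import date

# Phased application dates of the EU AI Act, per the schedule above.
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI systems; AI literacy requirement",
    date(2025, 8, 2): "Rules for general-purpose AI (GPAI) models",
    date(2026, 8, 2): "Most remaining rules (general applicability)",
    date(2027, 8, 2): "Requirements for high-risk AI embedded in regulated products "
                      "(e.g. medical devices, vehicles)",
}

def applicable_rules(on: date) -> list[str]:
    """Return the rule sets already in application on the given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= on]
```

For example, checking a date in early 2026 would return the prohibitions and the GPAI rules, but not yet the generally applicable rules that follow on 2 August 2026.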

It is our clear recommendation to begin the preparatory work as soon as possible. Several requirements already apply, while others will be phased in gradually towards 2026 and 2027.

The Danish law establishing the framework for the national implementation and enforcement of the Regulation entered into force on 2 August 2025. Denmark was the first EU country to carry out national implementation of the Regulation, underlining that AI regulation is a politically prioritised area. At the same time, a proposal has been introduced to update Danish legislation in step with the application of rules on, among other things, transparency and certain high-risk AI systems from 2 August 2026. The proposal will expand the national enforcement and sanctions framework, which in the first instance has primarily been aimed at prohibited AI practices.
