A legal analysis following the EU Artificial Intelligence Act (AI Act)

The adoption of Regulation (EU) 2024/1689 (the AI Act) reshapes the liability framework associated with artificial intelligence in the EU. Going forward, it will no longer be sufficient to invoke “best practices” or “ethical use”: it will be essential to demonstrate, through traceability and evidence, that the system has been designed, placed on the market, and used in accordance with a framework of due diligence, governance, and risk management.

As regards timing, the AI Act entered into force on 1 August 2024, twenty days after its publication in the Official Journal of the European Union, and applies generally from 2 August 2026, with certain obligations applying earlier on a staggered basis: the prohibitions on certain AI practices from 2 February 2025 and the rules on general-purpose AI models from 2 August 2025. Accordingly, the period 2025–2026 operates as a transition phase towards increasingly enforceable compliance.

Within this framework, three vectors are already consolidating:

  1. The allocation of duties according to each actor’s role in the value chain.
  2. The progressive insurability of risk through the standardisation of controls.
  3. Ex ante assessment as a legal threshold for market entry.

Allocation of Responsibilities: “Proper Use” as a Duty of Care

The AI Act is built on a central premise: liability is organised according to roles and control within the value chain, not solely by reference to the ultimate harm. The Regulation imposes obligations on providers, deployers (professional users), importers, distributors, and other operators, including those established outside the EU where the system is placed on the market or used within the Union.

“Proper use” is legally articulated along two axes of diligence:

  1. On the one hand, the provider must place on the market a system with safeguards appropriate to its risk profile.
  2. On the other, the deployer determines the context of use and must avoid improper or deviating uses, ensure meaningful human oversight, and comply with the applicable conditions and instructions (particularly in sensitive domains).


For high-risk systems, the standard is heightened: the AI Act requires a continuous risk management system throughout the entire lifecycle. In practice, this shifts the focus towards evidentiary matters: it will be decisive to demonstrate which risks were identified, how they were assessed, which measures were adopted, and how these were reviewed in light of changes to the system or its deployment context.

In addition, for certain high-risk deployments, a Fundamental Rights Impact Assessment (FRIA) is required prior to use, particularly in the public sector and in certain private entities providing public services. The FRIA must set out the intended use, the groups potentially affected, the risks to fundamental rights, the human oversight measures in place, and the mitigation actions. If it is missing or treated as a mere formality, it may increase both regulatory and civil liability risk.

Moreover, the AI Act does not displace other legal regimes: the GDPR, privacy rules, and other frameworks (consumer, contractual, and product liability) continue to apply, making “multi-front” liability scenarios increasingly common.

Insurance Risk: From Uncertainty to Auditability

The insurance market is reconfiguring coverage because AI-related risk is rarely purely technical. It typically manifests as a composite risk: third-party harm, contractual breaches, security failures, discrimination claims or challenges linked to automated decision-making, and, particularly in regulated sectors, exposure to corrective or sanctioning measures.

Here, the AI Act produces an indirect but decisive effect: it introduces a regulatory standard that facilitates the auditability of risk and, therefore, its insurability, provided that the organisation can demonstrate governance and controls. In this vein, EIOPA has promoted criteria to integrate AI into existing internal control and risk management frameworks within the insurance sector.

In parallel, Directive (EU) 2024/2853 on liability for defective products broadens the concept of “product” to cover, inter alia, software, thereby reinforcing exposure to strict liability in certain scenarios. This heightens the importance of a coherent strategy that aligns technical compliance, contractual design, and insurance management.

Ex Ante Assessment and Lifecycle Approach: The New Market-Entry Threshold

From a business perspective, the most significant change is that placing a system on the market ceases to be a purely technological milestone and becomes a legal diligence threshold. For high-risk systems, the minimum requirement is structured around the risk management system: a continuous, documented, and updatable process to identify and mitigate risks, subject to periodic review.

Where required, the FRIA functions as the connecting element between the system and fundamental rights in the specific case. And where personal data are involved, alignment with the GDPR is indispensable: a project may be reasonably aligned with the AI Act and yet still be non-compliant due to deficiencies regarding lawful basis, transparency, data minimisation, or the exercise of data subject rights.

Finally, compliance does not end at launch. The AI Act imposes a lifecycle approach: monitoring, detection of deviations, review in response to significant changes, and the capacity to respond to incidents. In short, this is the approach that will, in practice, distinguish merely “pilot-ready” projects from those that are truly “scalable” in an increasingly demanding regulatory environment.
