Why AI Trust Must Extend Beyond Algorithms
Discussions around artificial intelligence often center on data protection, ethics, and algorithmic transparency. However, as AI systems become increasingly embedded in physical products, another regulatory dimension is gaining prominence: product regulation.
AI is no longer confined to digital platforms or decision-support software. It is now integrated into a vast array of physical systems, including industrial machinery, consumer electronics, HVAC systems, medical devices, smart cameras, robotics, and IoT devices.
When AI becomes part of a regulated product, compliance is no longer governed solely by AI-specific legislation. Instead, it intersects with established product safety frameworks, market regulation, and other digital regulations (covering data and cybersecurity) that collectively shape the requirements for AI-embedded products. This intersection significantly reshapes how AI must be designed, assessed, and maintained throughout its lifecycle.
From Software Governance to Product Governance
Traditional AI governance frameworks focus on mitigating bias and discrimination, ensuring transparency and explainability, enabling human oversight, and protecting data and fundamental rights. These principles are essential for the development and deployment of trustworthy AI.
Product regulation, by contrast, concentrates on mechanical and electrical safety, pressure containment, electromagnetic compatibility, cybersecurity, and minimizing physical risk to people or property, all validated through conformity assessments before market access.
When AI is embedded in hardware, these two governance worlds converge. The compliance analysis must therefore expand from "Is the model fair?" to also include critical product safety questions:
- What happens if the model malfunctions?
- Does the AI influence safety-critical parameters?
- Could the model's adaptive behavior invalidate a prior safety certification?
- Does integrating AI change the product's regulatory classification?
These are fundamental product safety questions, not purely AI governance issues.
How the EU AI Act Classifies AI in Products
When discussing EU AI regulation, the primary reference is the EU AI Act, the first comprehensive, horizontal regulation governing AI systems across the European Union. The Act establishes a risk-based framework that classifies AI systems into prohibited, high-risk, limited-risk, and minimal-risk categories, with escalating compliance obligations.
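As a rough mental model, the tiers and the general shape of their obligations can be sketched as a simple lookup. This is an illustrative simplification for orientation only, not a restatement of the Act's legal text:

```python
# Illustrative sketch of the AI Act's four risk tiers and the general
# shape of their obligations. A simplification for orientation only;
# the actual legal tests live in the Act itself.
AI_ACT_RISK_TIERS = {
    "prohibited": "Banned outright (e.g. certain manipulative or social-scoring practices).",
    "high-risk": ("Risk management, technical documentation, conformity assessment, "
                  "human oversight, logging, post-market monitoring."),
    "limited-risk": "Transparency duties (e.g. disclosing that users interact with AI).",
    "minimal-risk": "No mandatory obligations; voluntary codes of conduct.",
}

for tier, obligations in AI_ACT_RISK_TIERS.items():
    print(f"{tier}: {obligations}")
```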
While the EU AI Act provides a harmonized framework, Member States are also developing complementary initiatives to operationalize supervision and enforcement.
Examples of National Implementation
Italy has adopted a national AI law (Law 132/2025) that clarifies sectoral oversight, introduces enforcement mechanisms (including sanctions for misuse), and designates competent authorities for implementation.
Similarly, Spain has established the Spanish Agency for Artificial Intelligence Supervision (AESIA) and is advancing national governance measures to support the enforcement of the AI Act.
These national initiatives do not replace the AI Act. Rather, they clarify how it is applied in practice — including how AI embedded in regulated products is supervised and assessed.
For manufacturers, this reinforces a critical point: AI compliance must be monitored not only at the EU level but also through the lens of national enforcement and sectoral interpretation.
Under the AI Act, an AI system is classified as high-risk under Article 6 if:
- It is intended to be used as a safety component of a product subject to third-party conformity assessment under EU harmonization legislation; or
- It is itself a product covered by EU harmonization legislation listed in Annex I and subject to third-party conformity assessment.
This is precisely where AI regulation and product law converge.
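Read as a decision rule, this two-pronged test can be sketched in a few lines of code. The model below is deliberately simplified and hypothetical; the parameter names are ours, and the real analysis requires legal judgment:

```python
def is_high_risk_under_article_6(
    safety_component_of_covered_product: bool,  # prong 1: safety component role
    is_itself_covered_product: bool,            # prong 2: product under Annex I legislation
    third_party_assessment_required: bool,      # that legislation mandates third-party assessment
) -> bool:
    """Simplified model of the Article 6 product-related test: either prong,
    combined with a third-party conformity assessment requirement,
    points toward high-risk classification."""
    return (safety_component_of_covered_product or is_itself_covered_product) \
        and third_party_assessment_required

# Example: AI acting as a safety component in machinery that requires
# third-party conformity assessment would be flagged as high-risk.
print(is_high_risk_under_article_6(True, False, True))  # True
```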
The "Safety Component" Threshold
The AI Act defines a safety component as a component that fulfills a safety function or whose failure could endanger health, safety, or property.
For embedded AI, the practical assessment of this threshold includes:
- Does the AI influence protective mechanisms?
- Does it control safety-relevant operating limits?
- Could malfunction create hazardous conditions?
- Is it part of a safety-related control system?
If so, the AI system may qualify as high-risk.
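One way to operationalize this screening is a checklist whose answers feed the safety-component prong of the Article 6 sketch above. A minimal example, with question keys of our own invention:

```python
# Hypothetical screening checklist for the "safety component" threshold.
# Question keys and the any()-based trigger are our own simplification;
# a real assessment requires engineering and legal judgment.
SAFETY_COMPONENT_QUESTIONS = {
    "influences_protective_mechanisms": False,
    "controls_safety_operating_limits": False,
    "malfunction_creates_hazard": False,
    "part_of_safety_control_system": False,
}

def may_be_safety_component(answers: dict[str, bool]) -> bool:
    """A 'yes' to any screening question suggests the AI may fulfil a
    safety function and should undergo the full Article 6 analysis."""
    return any(answers.values())

# Example: only the hazard-on-malfunction question is answered 'yes'.
answers = dict(SAFETY_COMPONENT_QUESTIONS, malfunction_creates_hazard=True)
print(may_be_safety_component(answers))  # True -> escalate to full assessment
```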
An AI system may also qualify as high-risk when it is itself a regulated product under sectoral legislation. A key example is the Medical Device Regulation, which recognizes standalone software as a medical device when intended for medical purposes.
Regulation (EU) 2017/745 on Medical Devices
"Software in its own right, when specifically intended by the manufacturer to be used for one or more of the medical purposes set out in the definition of a medical device, qualifies as a medical device." (Preamble 19)
"'Medical device' means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes (...)" (Article 2)
This definition may be used to support the interpretation that an AI system used for medical purposes, whether standalone or embedded in a product, can itself be considered a product covered by Union harmonization legislation under the AI Act.
However, this interpretation remains subject to further guidance. While current examples indicate the intended direction, additional clarification from the European Commission is expected.
In practice, the intended purpose and the functional role of the AI system remain central to evaluating its regulatory treatment under other product legislation.
Beyond the AI Act: The Broader Shift in Product Regulation
Focusing exclusively on the AI Act risks missing the larger regulatory transformation underway.
The AI Act is just one piece of the puzzle.
Even if an embedded AI system does not qualify as high-risk under the AI Act, product legislation may still impose significant obligations. AI increasingly interacts with established frameworks governing machinery, medical devices, radio equipment, pressure equipment, general consumer safety, cybersecurity, and market surveillance of products.
Most of these frameworks were not written with adaptive, self-learning systems in mind — yet they now apply to products that contain them.
The real transformation is not simply that AI is regulated.
It is that product regulation must now account for adaptive, probabilistic, software-driven behavior inside physical systems. This shift is already visible in the General Product Safety Regulation (GPSR), which has applied since 13 December 2024. The GPSR requires that products not pose risks to physical or mental health, explicitly addressing hazards arising from AI algorithms and cybersecurity threats. Under this regulation, manufacturers of products containing even non-high-risk AI must consider:
- potential risks from algorithmic bias or faulty decision-making;
- unintended behaviors when the AI system is deployed; and
- comprehensive safety monitoring and rapid-response capabilities (sketched below).
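To make the third point concrete, a post-market safety monitor can start as simply as comparing live model behavior against a validated baseline and alerting on drift. The sketch below is hypothetical; the metric and threshold are illustrative choices, not GPSR requirements:

```python
import statistics

def drift_alert(baseline_outputs: list[float],
                recent_outputs: list[float],
                max_shift: float = 0.1) -> bool:
    """Flag when the mean of recent model outputs drifts from the validated
    baseline by more than max_shift. A real monitor would track richer
    statistics, log incidents, and feed a rapid-response process."""
    shift = abs(statistics.mean(recent_outputs) - statistics.mean(baseline_outputs))
    return shift > max_shift

# Example: outputs of a hypothetical temperature-control model.
baseline = [0.50, 0.52, 0.49, 0.51]
recent = [0.66, 0.70, 0.68, 0.67]
if drift_alert(baseline, recent):
    print("Drift detected: trigger safety review and corrective action.")
```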
Over time, product frameworks like the GPSR will inevitably evolve to accommodate the reality of AI embedded in products. Stay alert to these changes by monitoring the regulatory landscape.
The Core Challenge: Dynamic Systems in Static Frameworks
Traditional product conformity frameworks assume a stable system architecture, predictable behavior, fixed risk profiles, and clearly defined updates.
But AI challenges those assumptions.
Adaptive behavior, data-driven optimization, remote updates, and model retraining introduce regulatory questions such as:
- When does a software update trigger re-certification?
- Can post-market learning affect the original safety assessment?
- How should manufacturers manage version control for adaptive systems?
- What constitutes a "substantial modification" in AI-enabled products?
These questions extend far beyond the AI Act and will increasingly be addressed through evolving standards and industry practice.
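Pending such guidance, manufacturers can at least make these questions auditable in their release process. The sketch below is a hypothetical version record; its fields and "substantial modification" heuristic are placeholders, not a legal test:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRelease:
    """Audit record for one version of an embedded, adaptive model."""
    version: str
    released: date
    training_data_hash: str          # ties the release to its exact training set
    changed_safety_behavior: bool    # did safety-relevant outputs change?
    changed_intended_purpose: bool   # does the model now serve a new purpose?

def may_be_substantial_modification(release: ModelRelease) -> bool:
    """Placeholder heuristic: treat safety-relevant or purpose-changing
    updates as candidates for reassessment. The actual legal test for a
    'substantial modification' will come from guidance and standards."""
    return release.changed_safety_behavior or release.changed_intended_purpose

release = ModelRelease("2.4.0", date(2025, 3, 1), "sha256:...", True, False)
if may_be_substantial_modification(release):
    print(f"Version {release.version}: flag for conformity reassessment.")
```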
The Key Takeaway
As AI continues to move from digital platforms into safety-critical hardware, organizations must transition from isolated AI governance to AI-enabled product governance by design.
Those who treat AI as merely a software feature will struggle.
Those who integrate AI and its lifecycle into their product compliance journey, from design to decommissioning, will define the next generation of trusted intelligent products.