As artificial intelligence (AI) becomes increasingly embedded in business and consumer products, it accelerates innovation while introducing new categories of risk. To address these challenges, the European Union has updated its product safety legislation with the General Product Safety Regulation (Regulation (EU) 2023/988, GPSR), which has applied since December 2024.
The GPSR modernizes product safety law for the digital age. Replacing the 2001 General Product Safety Directive, it broadens the definition of "safety" to encompass not only traditional physical concerns but also cybersecurity, mental health, and social well-being. For companies that develop, produce, or market AI-enabled products, this regulation represents a significant shift in responsibilities. The risks are tangible: algorithmic bias could lead a smart camera to misidentify a customer; performance drift might cause predictive maintenance tools to overlook equipment failures; automation bias could lead workers to uncritically follow AI recommendations; and cybersecurity vulnerabilities may expose devices to hacking. By incorporating these risks into the definition of safety, the GPSR mandates that businesses treat AI safety with the same gravity as traditional fire, electrical, or mechanical hazards.
The new regulatory landscape for AI-enabled products is shaped by both the EU AI Act and the GPSR. While the AI Act focuses on the trustworthiness and fundamental rights implications of AI systems, the GPSR ensures that AI does not compromise the safety of consumer products. For enterprises, this means compliance is no longer about checking boxes but about integrating AI safety throughout the entire product lifecycle. Businesses that adopt these practices early will not only mitigate financial and legal risks but also build trust with regulators and consumers.
Why AI Safety Is Central to the GPSR
Traditionally, product safety regulations focused on preventing physical harm from risks like electric shocks, fire, and mechanical failures. However, the rise of connected and AI-enabled devices introduces new risk profiles that the GPSR now explicitly addresses. Key risks include:
- Bias and misclassification: An AI model that incorrectly identifies individuals, objects, or behaviors can lead to harmful or ineffective outcomes. (A sketch of a simple subgroup error-rate check follows this list.)
- Automation bias: Users may over-rely on AI recommendations, disregarding their own judgment or established safety protocols.
- Performance drift: Models that adapt to new data may become less accurate or reliable over time.
- Contextual misapplication: Systems may not perform as intended when used in different environments, such as a new building type, climate, or cultural context.
- Privacy and surveillance risks: Connected devices may inadvertently record sensitive information, exposing users or eroding trust.
- Cybersecurity flaws: Hacking, data manipulation, or adversarial attacks can cause unintended and harmful system behavior.
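To make the first of these risks concrete, the sketch below shows one way a team might screen field logs for uneven misclassification rates across user groups. It is a minimal illustration under our own assumptions, not a prescribed GPSR method: the group labels, the record format, and the 0.02 tolerance are all hypothetical.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_bias(records, tolerance=0.02):
    """Flag groups whose error rate exceeds the best group's by more than tolerance."""
    rates = error_rate_by_group(records)
    best = min(rates.values())
    return {g: r for g, r in rates.items() if r - best > tolerance}

# Illustrative field log: a detector misclassifying one group far more often.
log = ([("group_a", "person", "person")] * 98 + [("group_a", "object", "person")] * 2
       + [("group_b", "person", "person")] * 90 + [("group_b", "object", "person")] * 10)
print(flag_bias(log))  # {'group_b': 0.1} -> a disparity worth investigating and documenting
```

A flagged disparity like this would feed directly into the risk documentation the GPSR expects manufacturers to maintain.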

Key GPSR Obligations for Businesses with AI-Embedded Products
The GPSR establishes a series of concrete duties that businesses must integrate into their product design and operational workflows. These are not abstract principles but actionable requirements that determine a product's eligibility to remain on the EU market. For companies working with AI-enabled or connected devices, the following obligations are particularly critical:
1. Lifecycle Safety (Article 5): Products must remain safe not only at launch but also after software updates, model retraining, or new data inputs. The old mindset of "once safe, always safe" is no longer sufficient.
2. Ongoing Risk Evaluation (Articles 6 and 9): Risk assessments must now account for predictable misuse, bias, performance drift, automation bias, and impacts on mental or social well-being. This documentation must be maintained for ten years.
3. Post-Market Safety and Monitoring (Articles 19–22): Manufacturers are required to implement post-market surveillance (PMS) systems to identify emerging hazards, report incidents, and participate in the EU Safety Gate rapid alert system.
4. Shared Responsibility for Modifications (Article 13): A substantial modification—such as retraining an AI model, releasing a major software update, or integrating a third-party IoT device—can legally designate an entity as the new "manufacturer," transferring full responsibility for GPSR compliance.
In practice, this means every update, whether a simple patch or a major AI model retraining, can trigger new safety obligations and shift liability. Compliance cannot be an afterthought; it must be integrated into the engineering pipeline, ensuring that safety checks, testing, and documentation are part of every release cycle. (A sketch of such a release gate follows this list.)
5. Transparency and User Information (Article 8, Recital 31): Users must be provided with clear, understandable information about an AI's capabilities, limitations, false alarm rates, and data collection practices.
6. Supply Chain Accountability (Articles 7–12): Obligations extend to all economic operators, including manufacturers, importers, distributors, and online platforms. Each actor is responsible for ensuring compliance and maintaining traceability of components and responsibilities.
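To illustrate the release-cycle point above, here is a minimal sketch of a pre-release safety gate that compares a candidate update's metrics against the last approved release and fails the build on regression, in the spirit of Article 5 lifecycle safety. The file names, metric fields, and thresholds are assumptions for illustration; the GPSR does not prescribe specific values.

```python
import json
import sys

SAFETY_BASELINE = "safety_baseline.json"  # assumed file: metrics of the last approved release
MAX_ACCURACY_DROP = 0.01                  # illustrative tolerance, not a GPSR value
MAX_FALSE_ALARM_RATE = 0.05               # illustrative ceiling, not a GPSR value

def gate_release(candidate: dict) -> bool:
    """Pass the update only if it is at least as safe as the approved baseline."""
    with open(SAFETY_BASELINE) as f:
        baseline = json.load(f)
    if baseline["accuracy"] - candidate["accuracy"] > MAX_ACCURACY_DROP:
        print("Blocked: accuracy regression versus the approved release")
        return False
    if candidate["false_alarm_rate"] > MAX_FALSE_ALARM_RATE:
        print("Blocked: false-alarm rate above the documented limit")
        return False
    return True

if __name__ == "__main__":
    # Usage: python gate.py candidate_metrics.json (produced by the test run)
    with open(sys.argv[1]) as f:
        approved = gate_release(json.load(f))
    sys.exit(0 if approved else 1)  # a non-zero exit fails the CI pipeline
```

Wired into continuous integration, the non-zero exit code blocks deployment and leaves an audit trail showing that each release was verified before it shipped.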
The GPSR does not operate in isolation. It functions alongside other key EU frameworks, including the AI Act, the Cyber Resilience Act (CRA), and the revised Product Liability Directive (PLD). Together, these regulations form a comprehensive safety and accountability network for AI-enabled products.
Table 1: Key GPSR Obligations for Businesses with AI-Embedded Products
| Obligation | Article | Core Requirement | What It Means for AI Products |
|---|---|---|---|
| Lifecycle Safety | Art. 5 | Products must remain safe after updates, retraining, or data changes. | Each AI update triggers renewed safety verification. |
| Ongoing Risk Evaluation | Arts. 6 & 9 | Continuous assessment of bias, drift, misuse, and mental-health effects. | Maintain risk logs and keep records for 10 years. |
| Post-Market Monitoring | Arts. 19–22 | Set up systems to detect, report, and act on incidents. | Integrate PMS and Safety Gate reporting into QA. |
| Shared Responsibility | Art. 13 | Major modifications can transfer "manufacturer" liability. | Retraining or IoT integration may shift compliance duties. |
| Transparency to Users | Art. 8, Rec. 31 | Provide clear info on AI functions, false-alarm rates, and data use. | Update user manuals and digital notices. |
| Supply-Chain Accountability | Arts. 7–12 | All economic operators share compliance duties. | Ensure traceability and updated contracts. |
Table 2: How GPSR Fits into the EU's Digital Regulatory Landscape
| Regulation | Core Focus | GPSR Connection |
|---|---|---|
| GPSR (2023/988) | Consumer product safety (physical + AI risks) | Baseline safety net for all products, including AI-enabled. |
| AI Act (2024/1689) | Trustworthiness & fundamental rights in AI | Governs how AI systems are designed & documented. |
| Cyber Resilience Act (CRA) | Cybersecurity of connected products | GPSR requires cybersecurity as part of safety. CRA sets detailed obligations. |
| Product Liability Directive (PLD, revised 2024) | Civil liability for defective AI/tech products | GPSR non-compliance increases exposure under PLD. |
In essence, the GPSR transforms AI safety from a one-time certification exercise into a continuous compliance discipline. To access the EU market, companies must demonstrate that their AI-enabled products are safe throughout their entire lifecycle, supported by traceable documentation and active monitoring. For executives, this positions compliance as a prerequisite for market access, not an afterthought. AI safety must be engineered into every update and retraining cycle, requiring supply-chain coordination and transparent communication to maintain regulatory confidence and consumer trust. Early adopters who build these capabilities now will not only reduce legal exposure but also gain a strategic advantage as reliable, future-ready innovators under the EU’s evolving product safety framework.
Why This Matters for Businesses
GPSR compliance carries significant implications beyond legal formalities. Companies that fail to meet these standards face financial, operational, and reputational consequences that can directly impact their market access and long-term competitiveness. Under Article 44, Member States must provide for effective, proportionate, and dissuasive penalties, and authorities can restrict sales or mandate recalls and market withdrawals (Articles 32–36). These enforcement actions are publicized through the EU Safety Gate portal, making incidents immediately visible to regulators, competitors, and consumers, which can lead to a swift loss of trust and investor confidence. The GPSR elevates AI safety from a technical issue to a strategic business imperative. Without demonstrating conformity, a product cannot receive the CE mark, and without the CE mark, it cannot be legally sold in the EU. For executives, this makes compliance a non-negotiable condition for market access. Consider, for instance, smart toys withdrawn from EU shelves due to undisclosed microphones or insecure data practices. A single non-compliance incident can trigger a sales ban, public exposure, and lasting brand damage.
Viewed through this lens, AI safety under the GPSR is a matter of business survival. Companies that act proactively by embedding risk assessment, documentation, and monitoring into their engineering cycles protect both their regulatory standing and their brand integrity. These early movers also gain a trust advantage with consumers and authorities, demonstrating a commitment to transparency and accountability in a rapidly evolving regulatory landscape.
How Businesses Can Prepare
The GPSR was adopted in May 2023 and has applied since December 13, 2024, replacing the 2001 General Product Safety Directive. From that date, all businesses placing products on the EU market, including AI-enabled and connected devices, must comply with its expanded safety obligations. Market surveillance under the GPSR has already begun, with authorities expected to increase inspections throughout 2025 (Figure 2).

Figure 2 underscores that companies should treat 2025 as the year of GPSR readiness. Technical files, AI risk assessments, and post-market monitoring systems must be organized now to protect market access and prevent costly disruptions. With enforcement and inspections set to intensify, a methodical and proactive approach to AI safety assessments is essential for fulfilling GPSR responsibilities.
For example, technical documentation for AI models, training data, updates, and cybersecurity measures must be retained for 10 years. This requires a robust governance framework for record-keeping. Similarly, AI-specific risk evaluations must extend beyond conventional physical threats, and designing how these assessments will be implemented for both existing and future products is crucial for avoiding market entry delays. Robust post-market monitoring systems are also necessary to record incidents, track real-world performance, and adapt to emerging hazards. To ensure accountability, businesses must revise contracts with distributors and suppliers to clearly define roles and responsibilities for updates and cybersecurity. Finally, transparency with consumers is key: companies must provide clear, easy-to-understand information on AI functions, limitations, and data management. This methodical approach not only ensures legal compliance but also builds customer confidence and long-term market viability.
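As a concrete illustration of such record-keeping, the sketch below appends each model update to an append-only audit log with a computed retention date. Only the ten-year retention duty comes from the regulation; the field names, identifiers, and file format are our own assumptions.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timedelta, timezone

RETENTION_YEARS = 10  # the documentation retention duty described above

@dataclass
class TechnicalFileEntry:
    product_id: str
    model_version: str
    change_summary: str       # e.g., "retrained detector on new site data"
    risk_assessment_ref: str  # pointer to the corresponding risk evaluation
    created_at: str
    retain_until: str

def log_update(path, product_id, model_version, change_summary, risk_ref):
    """Append one update record to an append-only JSON-lines audit log."""
    now = datetime.now(timezone.utc)
    entry = TechnicalFileEntry(
        product_id=product_id,
        model_version=model_version,
        change_summary=change_summary,
        risk_assessment_ref=risk_ref,
        created_at=now.isoformat(),
        # approximately ten years; ignores leap days for simplicity
        retain_until=(now + timedelta(days=365 * RETENTION_YEARS)).isoformat(),
    )
    with open(path, "a") as f:  # append-only: existing entries are never rewritten
        f.write(json.dumps(asdict(entry)) + "\n")

# Hypothetical product and reference identifiers, for illustration only.
log_update("technical_file.jsonl", "smartcam-100", "2.4.1",
           "retrained detector on new site data", "RA-2025-017")
```

However a company implements it, the essential properties are the same: every update is logged, linked to a risk assessment, and retrievable for the full retention period.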
Beyond compliance, companies that communicate their commitment to AI safety can turn regulation into a competitive advantage. Customers are more likely to trust and remain loyal to brands that demonstrate a dedication to safety and do not take shortcuts.
What Businesses Should Do Today to Prepare for GPSR
Compliance with the GPSR may seem daunting, but it can be achieved through a structured, practical, and efficiently executed process. At Nemko Digital, we don't just explain compliance; we work alongside your teams to get it organized and ready. Our GPSR AI Safety Assessment Framework provides businesses with a clear, actionable path to GPSR readiness. Here is a brief overview of how it works.
Our GPSR AI Safety Assessment Framework: What It Entails
- Audit existing documentation – Ensure your technical files capture AI models, updates, and cybersecurity safeguards (Article 9).
- Expand risk assessments – Integrate evaluations for bias, performance drift, and automation bias into your existing safety processes (Articles 6 & 9).
- Set up post-market monitoring – Establish processes for incident logging, Safety Gate reporting, and continuous AI performance checks (Articles 19–22); a drift-check sketch follows this list.
- Review supply chain contracts – Clarify responsibilities for updates, modifications, and compliance across all actors (Articles 7–13).
- Strengthen customer transparency – Update manuals, warnings, and client communication to explain AI limits and data handling (Article 8, Recital 31).
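As one example of a continuous AI performance check, the sketch below uses the population stability index (PSI), a common drift heuristic, to compare field data against the distribution a model was validated on. The bin count and the 0.2 alert threshold are industry rules of thumb, not GPSR requirements.

```python
import math

def _bin_shares(sample, edges):
    """Fraction of the sample falling into each bin; floored to avoid log(0)."""
    counts = [0] * (len(edges) - 1)
    for x in sample:
        x = min(max(x, edges[0]), edges[-1])  # clamp outliers into range
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1] or (i == len(edges) - 2 and x == edges[-1]):
                counts[i] += 1
                break
    return [max(c / len(sample), 1e-6) for c in counts]

def psi(expected, actual, bins=10):
    """Population stability index between a reference sample and a field sample."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    e = _bin_shares(expected, edges)
    a = _bin_shares(actual, edges)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Scores the model produced during validation vs. scores observed in the field.
validation_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
field_scores = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9, 1.0, 1.0]

drift = psi(validation_scores, field_scores)
if drift > 0.2:  # common heuristic for a significant shift, not a GPSR value
    print(f"PSI {drift:.2f}: investigate, document, and assess as a potential incident")
```

Run on a schedule, a check like this turns "continuous monitoring" from a policy statement into a logged, repeatable control that feeds the incident records described above.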
Conducted over a short engagement cycle, the assessment benchmarks your existing product documentation, risk evaluations, AI lifecycle controls, and governance mechanisms against the regulation's key provisions. The evaluation delivers a precise view of compliance readiness. The outcome is a concise, executive-level readiness report and compliance evaluation statement, forming a solid foundation for demonstrating conformity during market surveillance or CE-marking processes.
This is not theory; it's a step-by-step playbook we execute together. With our expertise, businesses that start early can transform GPSR compliance into a trust and market advantage, securing EU market access with confidence.
To make this manageable, and so that companies don't need to start from scratch, we offer ready-to-use GPSR safety assessment checklists and monitoring templates that can be plugged directly into workflows. These tools help teams systematically address lifecycle risks, document compliance, and maintain audit readiness without slowing down development.
Our assessment, combined with tailored guidance, helps you establish clear governance roles: who owns and updates the technical documentation, who signs off on major updates, and who maintains the ongoing risk log. Without clear accountability, compliance processes often fall through the cracks during product updates; we help you avoid that risk.
Compliance may look complex, but it is also an opportunity, and our structured GPSR AI Safety Assessment and Evaluation makes it manageable. The process gives leadership teams a clear, evidence-based understanding of their organization's position and gaps under the General Product Safety Regulation (EU 2023/988), while building stronger relationships with authorities and customers step by step.
Final Takeaway
With the implementation of the GPSR, AI safety has become as critical as physical safety. Companies offering connected or AI-enabled devices in the EU must recognize that compliance is a continuous journey, not a one-time event. When managed proactively, AI safety assessments under the GPSR can be more than a regulatory exercise; they can become a competitive advantage that demonstrates your products are reliable, transparent, and safe in a connected world.
This is where Nemko Digital supports companies: by translating abstract regulatory obligations into practical, technical, and governance steps. From audit-ready documentation templates to clear governance frameworks, we help businesses not only meet GPSR requirements but also turn compliance into a strategic advantage.
AI Expert Authors
Mónica Fernández Peñalver
Mónica has been actively involved in projects that advocate for and advance Responsible AI through research, education, and policy. Before joining Nemko, she explored the ethical, legal, and social challenges of AI fairness, focusing on the detection and mitigation of bias. She holds a master's degree in Artificial Intelligence from Radboud University and a bachelor's degree in Neuroscience from the University of Edinburgh.
Shruti Kakade
Shruti has been actively involved in projects that advocate for and advance AI Ethics through data-driven research and policy. Before starting her master's, she worked on interdisciplinary applications of data science and analytics. She holds a master's degree in Data Science for Public Policy from the Hertie School of Governance, Berlin, and a bachelor's degree in Computer Engineering from the Pune Institute of Computer Technology, India.