Ireland’s Federated Approach to AI Regulation
Tuesday, 11 March 2025
On Tuesday 4 March 2025, the Government approved the designation of an initial list of eight public bodies to act as competent authorities under the EU Artificial Intelligence (AI) Act (the “Act”), which will be responsible for implementing and enforcing the Act within their respective sectors. In its announcement on its website, the Department of Enterprise, Tourism and Employment (the “Department”) indicated that additional authorities, and a lead regulator, would be designated by a future government decision. Under Article 70(1) of the Act, Member States are required, by 2 August 2025, to appoint at least one notifying authority and at least one market surveillance authority to perform the role of national competent authorities, which must exercise their powers “independently, impartially and without bias” so as to safeguard the objectivity of their activities and tasks. In addition, Member States are required to appoint, from amongst the market surveillance authorities designated, a single point of contact for the Act.
The bodies initially designated as national competent authorities are:
- The Central Bank of Ireland,
- The Commission for Communications Regulation,
- The Commission for Railway Regulation,
- The Competition and Consumer Protection Commission,
- The Data Protection Commission,
- The Health and Safety Authority,
- The Health Products Regulatory Authority, and
- The Marine Survey Office of the Department of Transport.
This move indicates that Ireland will adopt a federated approach to AI regulation, with established regulators performing the role of national competent authorities in their individual sectors. Similar to the approach to be adopted in respect of the NIS2 Directive, which governs cybersecurity in the EU, the federated model relies heavily on established regulators’ knowledge of the entities within their sectors and of the challenges and issues those entities face. This system has clear benefits: it allows entities to engage with regulators with which they are already familiar as they tackle the challenges presented by the adoption of AI.
However, this approach, perhaps more so than in respect of cybersecurity (which benefits from well-established technical standards and operational expertise), also poses unique challenges for regulators that were not established with the regulation of AI in mind and that lack the institutional knowledge and operational capability to perform the role from the outset. To perform the role of a national competent authority under the Act adequately, the designated bodies, none of which has particular expertise in AI at this point, will very likely require significant investment in staffing, expertise and ongoing training.
In this regard, Article 70(3) of the Act requires Member States to provide national competent authorities with “adequate technical, financial and human resources, and with infrastructure to fulfil their tasks effectively”.
Clearly, with AI regulation in its infancy and AI adoption only beginning to take off in Ireland, it will be vital that the Government puts these supports and resources in place for the bodies above, and for any later additions to the list, so that they can effectively implement and enforce the Act and ensure the safe adoption of AI in the State. It remains to be seen whether the Regulation of Artificial Intelligence Bill, currently being prepared by the Government and which will formally designate the national competent authorities, will offer further clarity as to the role those authorities will play under this model of regulation, or further detail as to the nature of the supports the Government will provide.
While the Government is progressing the regulatory framework, it is clear that some sectors are already moving on this subject. For example, we are aware that the Minister for Health has tasked the Health Information and Quality Authority (HIQA) with developing guidance to promote the responsible and safe use of AI in health and social care. This work is expected to include engagement with key stakeholder groups, and the guidance is expected to be shared with the public later this year. The Health Service Executive (HSE) has also established an AI and Automation Centre of Excellence (CoE) to collate, assess and implement the responsible and safe use of AI in delivering health services. In addition, the Department of Health and the HSE have commenced the development of a strategy for AI in health (as committed to in the Programme for Government) to promote the responsible and safe use of AI in a number of areas, including clinical and patient care, operations and administration, research and development, patient engagement and experience, and public health. It will be vital that other sectors are equally proactive in developing AI strategies in line with the EU AI Act to ensure the smooth and safe adoption of AI in the State.
The Artificial Intelligence team in Byrne Wallace Shields LLP is actively working and advising in this area and is well placed to assist clients in addressing the actual and potential impacts of the Act.