What Challenges Does the European AI Office Face?
A Pioneering Office in Regulation
The European Commission has established the Artificial Intelligence Office with the objective of strengthening the European Union’s leadership in the field of Artificial Intelligence (AI) while enabling the development, implementation, and future uses of AI.
The AI Office therefore plays a key role in the implementation of the AI Act, undertaking oversight and supervisory actions to ensure compliance with the AI Act across the 27 Member States of the European Union. It also fosters the development and use of trustworthy AI and promotes international cooperation while protecting against potential risks, ultimately supporting social and economic benefits and innovation.
Functioning
To ensure that AI is safe and reliable, the European Commission has endowed the AI Office with a range of competencies, such as conducting assessments of general-purpose AI models, requesting information and measures from model providers, and imposing sanctions.
To this end, an organizational structure has been created, comprising five units and two advisors:
- ‘Excellence in AI and Robotics’ Unit. This unit is responsible for providing support and funding for research and development to foster an ecosystem of excellence. It coordinates the GenAI4EU initiative, stimulating the development of models and their integration into innovative applications.
- ‘Regulation and Compliance’ Unit. This unit coordinates the regulatory approach to facilitate the uniform application and enforcement of the AI Act across the Union, in close collaboration with the Member States. It will also contribute to investigations into possible infringements and to the administration of sanctions.
- ‘AI Safety’ Unit. This unit focuses on identifying systemic risks of highly capable general-purpose models, possible mitigation measures, and evaluation and testing approaches.
- ‘AI Innovation and Policy Coordination’ Unit. This unit oversees the implementation of the EU’s AI strategy, monitors trends and investment, stimulates AI adoption through a network of European digital innovation hubs and the creation of AI factories, and fosters an innovative ecosystem by supporting controlled testing environments and real-world trials.
- ‘AI for Social Good’ Unit. This unit is responsible for designing and implementing the AI Office’s international engagement on AI for social good, in applications such as weather modeling, cancer diagnostics, and digital twins for reconstruction.
The AI Office is led by the Head of the AI Office and operates under the guidance of a Chief Scientific Advisor, who ensures scientific excellence in model evaluation and innovative approaches, and an International Affairs Advisor, who follows up on the EU’s commitment to close collaboration with international partners on trustworthy AI.
Overall, this will enable the AI Office to provide a central mechanism for coordination, control, and supervision of AI implementation in the EU.
In other words, the AI Office must ensure that general-purpose AI (GPAI) model providers comply with the rules and, if they do not, may require them to adopt corrective measures. Moreover, to ensure coordinated supervision of these AI systems, the AI Office may streamline communications between sectoral bodies and national authorities through the creation of central databases.
Main Objective and Challenges
Shaping the development, implementation, and use of Artificial Intelligence in Europe following the entry into force of the AI Act is a challenge for the European Union: excessive regulation could discourage the creation of new companies and stifle innovation by erecting market entry barriers, thereby hindering the growth of the European tech industry.
Therefore, the main objective and challenge of the AI Office is to strike a balance between regulation and the promotion of innovation, keeping Europe at the forefront of the technological revolution. In this regard, the Office’s functions include monitoring and regulating as well as fostering innovation, through the regulation of controlled testing environments and through initiatives that support SMEs in developing AI systems, with the aim of promoting the socially sustainable development of these technologies.
In other words, these controlled environments, or sandboxes, will be made available to SMEs to facilitate the training, testing, and validation of GPAI systems before they are placed on the market or deployed, so that, under the supervision of the AI Office, these systems comply with legal requirements and can access the market.
Moreover, one of the main challenges the AI Office faces in the future is to provide clarity on the interpretation of the AI Act and its interaction with other laws, such as the General Data Protection Regulation (GDPR). This interplay is crucial: the promotion of these AI systems must not come at the expense of users’ personal data protection, as illustrated by the open questions surrounding the use of biometric recognition systems.
Candela holds a law degree from the Universidad de Granada, where she had her first contact with New Technologies Law and Digital Law, which she later developed in her final degree project. She is passionate about Digital Law, Intellectual and Industrial Property, Privacy and Data Protection, Competition Law, and E-Commerce.