Setting the AI standards: the underlying struggle for the future of Artificial Intelligence

As the EU’s AI regulation continues its legislative journey, regulators, standard-setters and innovators are called to define how Artificial Intelligence will be developed in practice.

Artificial Intelligence as a technology dates back to the 1950s. Nevertheless, only the last decade has seen AI increasingly applied in a variety of fields, mostly thanks to major advancements in computing power, machine learning techniques and available data.

The growing reliance on AI-powered tools has raised questions about their reliability, trustworthiness and accountability, catching the attention of regulators worldwide. In April 2021, the European Commission presented a comprehensive regulatory framework on AI with a risk-based approach largely drawn from product safety rules.

The draft Act follows the New Legislative Framework (NLF) regime, whereby manufacturers must carry out a conformity assessment demonstrating that essential requirements in terms of accuracy, robustness and cybersecurity are fulfilled. A quality management system must also be put in place to monitor the product throughout its lifecycle.

“The proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market that will be further operationalised through harmonised technical standards,” the draft proposal reads.

Thus, standards are to be developed to provide clear guidelines for designing AI systems. Following such standards leads to a presumption of conformity, reducing both compliance costs and legal uncertainty.

“Standardisation is arguably where the real rulemaking in the Draft AI Act will occur,” stated Michael Veale and Frederik Zuiderveen Borgesius in their thorough analysis of the AI Act.

It is therefore not surprising that the strategic role of technical standards for such a disruptive technology has attracted the attention of major regulatory powers, including the United States, Germany and China.

“The interpretative stances that will gather more traction will also help define the future of AI. This creates a geopolitical race to establish the main international ethical standards that will likely generate and increase power asymmetries as well as normative fragmentation,” reads a report by the Centre for European Policy Studies (CEPS).

Amid these geopolitical tensions, industry practitioners have been trying to work out how to embed the new legal requirements in their business models. That is particularly the case for suppliers in the AI supply chain, which under the draft AI Act will need to support providers in their compliance.

“If you are a third party in the AI supply chain, and you have to give assurance that you comply with your obligations, then there is a tool that helps you do it in the same format to all the providers you are servicing. Vice versa, if you are a provider, a standardised way of accepting assurance from your myriad of third parties makes your job easier,” explained John Higgins, chair of the Global Digital Foundation.
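
To make this concrete, here is a minimal sketch of what such a standardised assurance record could look like, written in Python; the class name and fields are hypothetical illustrations, not taken from the draft Act or any published standard.

```python
from dataclasses import dataclass

# Hypothetical sketch of a standardised assurance record that a third party
# in the AI supply chain could hand, in the same format, to every provider
# it services. All names are illustrative, not from any actual standard.
@dataclass
class AssuranceRecord:
    supplier: str               # legal entity issuing the assurance
    component: str              # e.g. "training dataset" or "pre-trained model"
    obligations_met: list[str]  # the obligations the supplier attests to
    evidence_refs: list[str]    # pointers to audit reports or test results
    issued_on: str              # date of attestation (ISO 8601)

# A provider could then accept records from its many third parties uniformly.
record = AssuranceRecord(
    supplier="Example Data Labs",
    component="training dataset",
    obligations_met=["provenance documented", "labelling procedure described"],
    evidence_refs=["audit-report-2022-01.pdf"],
    issued_on="2022-01-15",
)
```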

For instance, data suppliers will have to explain how the data was obtained and selected, the labelling procedure and the representativeness of the dataset. Similarly, accuracy and cybersecurity will be key areas: model builders will have to detail how the model was trained, its resilience to errors, the measures used to determine accuracy levels, possible fail-safe plans and so forth.
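
As a rough sketch of the kinds of fields such documentation might cover, consider the following; all keys and descriptions are invented for illustration and do not reproduce any harmonised standard.

```python
# Hypothetical documentation sketches; all keys and values are invented
# for illustration and do not reproduce any harmonised standard.
dataset_documentation = {
    "provenance": "how and where the data was obtained",
    "selection_criteria": "why these records were included",
    "labelling_procedure": "e.g. three annotators per item, majority vote",
    "representativeness": "distribution checks against the target population",
}

model_documentation = {
    "training_process": "architecture, data splits and training regime",
    "accuracy_measures": "metrics and how accuracy levels were determined",
    "error_resilience": "behaviour under noisy or adversarial inputs",
    "fail_safe_plan": "e.g. fall back to human review on low confidence",
}
```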

“Understanding which of the focus areas need support and extension requires extensive involvement from practitioners (innovators, technical experts and others) who have actually carried out AI projects and, of course, involvement of international standardisation organisations,” stressed Ott Velsberg, Chief Data Officer for the Estonian government.

The need to establish an inclusive stakeholder dialogue is also shared by EU lawmakers. Axel Voss, an influential MEP in digital affairs, proposed the creation of a European AI board including national AI regulators, data protection authorities, EU bodies, standardisation organisations and technical experts.

“What we need is an adequately resourced mechanism to supervise the uniform, EU-wide implementation and enforcement of the upcoming AI and digital laws as well as AI standards,” Voss said.

Indeed, venues for collaboration between industry practitioners and standard-setters are already emerging. The Eclipse Foundation, a global community of open-source practitioners, is in talks with several standardisation organisations about how international practices can be better taken into account in standard development.

“There are some successful models that come up from open source that the community would like to become a standard. Conversely, standards in certain cases would benefit from a reference implementation, but they lack the governance framework to host the standards’ correct implementation,” said Michael Plagge, ecosystem development director at the Eclipse Foundation.

The standard-setting process is, however, not without its challenges, with timing being perhaps the most significant one. Premature efforts may fail to reflect the current state and future development of the technology, while coming in too late might mean facing built-up infrastructure and well-established incumbent applications.

For Sebastiano Toffaletti, secretary-general at the European Digital SME Alliance, standard development needs to be inclusive; otherwise, the process risks being monopolised by Big Tech companies from outside Europe.

“Standardisation is a costly exercise that involves the work of highly qualified experts for years. In a competitive development, higher investment in experts entails higher chances to influence the content of standards,” Toffaletti said.

In turn, large European companies grouped in the European Tech Alliance (EUTA) have raised concerns that EU legislation, especially on data protection, might put them on the back foot against international competitors if it is not properly reflected in the technical standards.

“We worry about AI datasets trained abroad, and we would like to see this issue addressed in the future AI Act by having a third-party body responsible for conducting assessments of data sets developed outside the EU,” an EUTA spokesperson said.

The AI Foundation Forum is a key opportunity to bring regulators, standard-setters and innovators together to start a dialogue on AI standards, a necessary exercise to build bridges between these different worlds. A dialogue, however, is by definition not a one-off exercise; it is a continuing conversation that does not end once the standards have been agreed upon.

Veale and Zuiderveen Borgesius note that intermediary bodies, which normally play a key role in feeding back on how standards are being applied, have a very limited role in the AI Act. For the two academics, this might result in “big gaps in knowledge flows regarding how the draft AI Act is functioning on the ground.”

AI applications are still at an early stage. The upcoming harmonised standards will not only need to be built by consensus, to earn buy-in from the relevant industry stakeholders, but also be able to adapt flexibly to future developments in a fast-paced innovation ecosystem.

Source: https://tech.eu/free/44836/setting-the-ai-standards-the-underlying-struggle-for-the-future-of-artificial-intelligence/