Rapid progress in the field of artificial intelligence (AI) has led to the launch of numerous initiatives and national strategies over recent years. Klaus Heine* sets out the points of emphasis in the AI strategies being pursued in different countries, and where the opportunities and challenges of a European AI strategy lie.
In Europe, AI research and application follow a different path from that taken in the United States or China. In the USA, large, commercially organised tech giants have emerged, while in China the new technologies are guided by the state within private companies. Both approaches have revealed their own difficulties over recent years. In the USA there are manifold competition policy problems associated with the tech giants (e.g. Google and Facebook), while in China there are concerns that the intertwinement of government and tech firms may undermine the rule of law and the protection of individuals from the state (e.g. Huawei 5G infrastructure). By contrast, the European and Dutch AI strategies put the individual, with their needs and freedoms, centre stage in all AI applications. Admittedly, this aim does not immediately translate into concrete gains for Europe's (and the Netherlands') technological and economic competitiveness vis-à-vis the USA and China. But it is a unique approach that may prove successful in the long run, if it is translated into law and regulation as envisaged, for example, in the EU's White Paper on AI.
Trust in law and regulation is key
A key feature of the European AI strategy is that it seeks greater competitiveness and inclusion by making use of new technological achievements in a way that inspires trust and confidence. This makes trust and acceptance on the part of individual citizens a central precondition for embedding AI in the economy and society in a sustainable way. Such embedding is essential if the strategy's objective of securing long-term competitive advantages for Europe is to be met. The EU Commission's white paper on AI, which develops a concept for the social embedding of AI technology, relies heavily on this deliberately long-term approach.
Democracy and participation not only provide the central mechanisms for implementing AI technologies in people's day-to-day lives, but are also intended to render AI's novel risks manageable. In this way, regulation and technological innovation are understood as complementary components of a European AI strategy.
Promotion of cross-border cooperation in research needed
Cooperation in research and application between European partner countries in general, and between the major national research institutes in particular, merits special attention. At the level of research itself – for example in the context of the Horizon 2020 European research initiative and the planned Horizon Europe research programme – a large number of joint research projects have been launched, and more are planned. In contrast, Europe lags behind in application-oriented cross-border cooperation.
Past milestones as examples for new approaches
Artificial intelligence and big data are not only changing business models (e.g. the platform economy) around the world; these new technologies are also having a disruptive effect on our legal systems by posing a challenge to existing legal doctrines. New legal concepts and approaches are needed to meet these challenges and to ensure that the new technologies are put to work for the benefit of society as a whole.
However, there is considerable resistance in jurisprudence and among legal practitioners to questioning long-established patterns of legal doctrine, and consequently to tolerating uncertainty in accustomed routines. This is comparable to the reluctance of businesses to try out new AI business models on the market when immediate success is not assured. This is regrettable, as other countries are a step ahead of the EU in researching and applying AI technologies, which means it will not be easy to catch up in the area of AI business applications. On the other hand, Europe could take on a pioneering role with respect to the legal dimension of AI, thereby becoming a significant initiator of new modes of liability law and new ways of doing competition policy.
Defining legal personality and responsibility for AI
A response to two major sets of issues is required over the years ahead. First, the legal personality of AI and the associated accountability of AI-based systems need to be clarified. This issue is similar in scope to the question, debated some 200 years ago, of whether companies could have their own legal personality and, if so, in what form. Second, alongside the question of AI's legal personality, the question of data ownership is also central: whether and under what circumstances data are the property of the individual or of society as a whole needs to be debated. In this regard, decisions must be taken for each category of data.
The EU as initiator for legal innovation on AI and big data
AI and big data, in combination, have the capacity to be both a blessing and a curse. There is a lot to be said for considering whether data ownership should not, in principle, be transferred to an independent European body. This body could grant data use rights either to state authorities or to private parties, possibly in accordance with the specific interests pursued by companies. In this way, risk classes, ethical standards, and privacy and technical requirements, among other aspects, can be taken into consideration; the idea of a single digital market in the EU could also be implemented on this basis. The EU still has every opportunity to become a world leader in the legal regulation of new technologies.
* Dr Klaus Heine is Professor of Law and Economics at the Erasmus School of Law in Rotterdam, Jean Monnet Chair of Economic Analysis of European Integration and Co-Director of the Jean Monnet Centre of Excellence on Digital Governance. He is also a member of the Platform Learning Systems of the German government and of the Dutch AI Coalition.