by Ana Landeta, Director of the R+D+i Inst. at UDIMA and Felipe Debasa, Director of the ONSSTKT21stC at URJC
Without a doubt, within the European Union policy context, “Artificial Intelligence (AI) has become an area of strategic importance and a key driver of economic development. It can bring solutions to many societal challenges from treating diseases to minimising the environmental impact of farming. However, socio-economic, legal and ethical impacts have to be carefully addressed”[i].
Accordingly, organisations are starting to make moves that act as building blocks for imminent change and transformation. With that in mind, Traci Gusher-Thomas[ii] has identified four trends that demonstrate how machine learning is starting to bring real value to the workplace. She states that each of the following four areas provides value to an organisation seeking to move forward with machine learning, adding incremental value that can scale up to be truly transformational.
- The rise of the virtual assistant will transform the workplace: by 2020, an estimated 80 percent of business-to-customer conversations will be conducted by machines. That will have enormous implications for all organisations, both in terms of business processes and future staffing needs.
- Machine learning will combat rogue behavior: processes where companies monitor enormous amounts of data for specific trends and patterns in order to identify anomalies or rogue behavior are ripe for the application of machine learning (a minimal illustrative sketch follows this list).
- Businesses will gain new visibility into unstructured data: machine learning offers organisations the chance to gain deep insight into large volumes of unstructured data, to automate and accelerate existing business analysis, and to streamline and bring greater consistency to customer interaction. One area where this is being applied heavily is contract lifecycle management.
- Converting unstructured data will transform regulatory compliance: increased regulatory compliance is an area where machine learning and natural language processing/understanding are beginning to radically change the data landscape (especially in the financial services and life sciences sectors).
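To make the “rogue behavior” trend above concrete, the sketch below shows one common pattern: training an unsupervised anomaly detector over transaction records and routing flagged items to a human reviewer. It is a minimal illustration using synthetic data and scikit-learn's IsolationForest; the features, thresholds and library choice are assumptions for the example, not a recommendation from the sources cited here.

```python
# Illustrative sketch only: flag potentially "rogue" expense transactions with an
# unsupervised anomaly detector. Data, features and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount in EUR, hour of day submitted]
normal = np.column_stack([
    rng.normal(loc=120, scale=30, size=500),  # typical amounts
    rng.normal(loc=14, scale=2, size=500),    # submitted during working hours
])

# A few synthetic outliers: very large amounts submitted at unusual hours
rogue = np.array([[4800.0, 3.0], [5200.0, 2.0], [3900.0, 23.0]])
transactions = np.vstack([normal, rogue])

# 'contamination' encodes the expected share of anomalies in the data
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(transactions)  # -1 = flagged as anomalous, 1 = normal

for amount, hour in transactions[labels == -1]:
    print(f"Review: EUR {amount:,.2f} submitted around {int(hour):02d}:00")
```

In practice the flagged items would feed a review queue rather than trigger automatic action, which is also where the legal liability questions discussed below begin to bite.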
Regarding the evolving legal landscape and what businesses should do to comply in an Artificial Intelligence scenario, Chris Holder and Vikram Khurana’s article “Artificial Intelligence: the evolving legal landscape – and what businesses need to do to comply”[iii] points out that whether AI meets the full extent of its potential will depend, at least in part, on the suitability of the legal framework that underpins it.
The reasons for this are clear: a robust legal regime is essential to foster growth, attract investment, enable effective risk management, and help ensure public take-up of AI technology. Equally, imposing an unsuitable or overly onerous legal regime to govern AI may have the opposite effect, discouraging growth and stifling innovation.
Broadly speaking, in their view, the key areas of law that businesses will need to consider when deploying AI are as follows:
- Legal liability – it is clear that AI has the potential to expose businesses to risk, loss and liability. What is the legal status of an AI system that operates largely independently of its designers and ‘machine learns’ over time? And who (or what) is responsible when AI goes bad? Is there a timeframe within which the manufacturer remains liable, and, as machines learn to make their own decisions, does the risk at some point shift away from the manufacturer and move elsewhere?
- Data privacy and cybersecurity – many AI systems will depend on large datasets, some of which are likely to include personal data. How will existing and upcoming data protection laws (including the GDPR) apply to AI, and how can businesses design in compliance when using AI (a minimal sketch of one such measure follows this list)? These AI systems are likely to be connected to networks and possibly to each other – how can businesses mitigate cybersecurity risk in this interconnected environment?
- Employment law – much is being written about the potential for AI to displace human capital and the social implications of this. What are the HR issues that businesses need to consider when AI takes over tasks, processes and perhaps entire roles previously fulfilled by workers?
- Intellectual property – it is clear that IP will usually exist in AI, but IP may also be created by AI. Who owns information, know-how and inventions developed by an autonomous AI, and what protections are available to would-be rights-holders?
- Contracting for AI – the AI service provider sector is burgeoning, with many new and existing providers selling AI-driven software and tools that disrupt traditional technology services markets, including data analytics and robotic process automation. What are the contractual issues that users will need to consider when procuring AI systems and services in the market? How do providers create contracts that reflect how AI operates in practice?
- Regulatory specifics – AI will be used across all industry sectors. To that end, what are the specific issues that arise when AI is used in healthcare, for example, and how will they differ from situations where AI is used elsewhere, for example in transport and logistics? How will the relevant regulatory authorities deal with the issues raised?
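As a concrete illustration of the data privacy point above, one widely used “compliance by design” measure is to pseudonymise direct identifiers and coarsen sensitive attributes before personal data enters an AI training pipeline. The sketch below is a minimal example of that idea only; the field names, the keyed-hash scheme and the coarsening rule are assumptions for illustration, and pseudonymisation alone does not make a system GDPR-compliant.

```python
# Illustrative sketch only: pseudonymise direct identifiers before records reach an
# ML pipeline. Field names and the keyed-hash scheme are assumptions for this example.
import hashlib
import hmac

# The key should live outside the training environment (e.g. in a secrets manager)
PSEUDONYMISATION_KEY = b"replace-with-a-secret-held-elsewhere"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_record(record: dict) -> dict:
    """Keep only what the model needs: a tokenised join key plus coarsened attributes."""
    return {
        "customer_token": pseudonymise(record["email"]),  # no raw email downstream
        "age_band": f"{(record['age'] // 10) * 10}s",     # e.g. 34 -> "30s"
        "purchase_amount": record["purchase_amount"],
    }

raw = {"email": "jane.doe@example.com", "age": 34, "purchase_amount": 99.90}
print(prepare_record(raw))
```

Whether such a measure is sufficient in a given case depends on the broader legal assessment (lawful basis, minimisation, retention and so on) that the authors of the article cited above recommend carrying out.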
Thus, taking into account all of the above, it is clear that any serious assessment of AI deployment will require a thorough understanding of the legal risk being assumed, a risk management strategy to help ensure compliance, and a clear eye on the trends and developments that will shape law and regulation in this area.
It is therefore necessary to examine how specific applications of AI will require specific changes and adaptations to specific laws, and how businesses can engage with the resulting legal and compliance issues.
Accordingly, the European Union’s approach to Artificial Intelligence puts forward a strategy linking AI and robotics. It deals with technological, ethical, legal and socio-economic aspects to boost the EU’s research and industrial capacity and to put AI at the service of European citizens and the economy.
In its Communication on AI[iv], the European Commission builds a European approach based on three pillars:
(i) Being ahead of technological developments and encouraging uptake by the public and private sectors
The Commission is increasing its annual investments in AI by 70% under the research and innovation programme Horizon 2020, reaching EUR 1.5 billion for the period 2018-2020. It will:
- connect and strengthen AI research centres across Europe;
- support the development of an “AI-on-demand platform[v]” that will provide access to relevant AI resources in the EU for all users;
- support the development of AI applications in key sectors.
However, this represents only a small part of the total investment expected from the Member States and the private sector. It acts as the glue linking individual efforts so that, together, they form a solid investment with an expected impact much greater than the sum of its parts.
Given the strategic importance of the topic and the support shown by the European countries that signed the declaration of cooperation[vi] at the Digital Day, we can hope that Member States and the private sector will make similar efforts.
The High Level Expert Group on Artificial Intelligence[vii] (AI HLEG) will put forward policy and investment recommendations on how to strengthen Europe’s competitiveness in AI in May 2019.
By joining forces at the European level, the goal is to reach a combined total of more than EUR 20 billion per year over the next decade.
(ii) Preparing for socio-economic changes brought about by AI
To support the efforts of the Member States which are responsible for labour and education policies, the Commission will:
- support business-education partnerships to attract and keep more AI talent in Europe;
- set up dedicated training and retraining schemes for professionals;
- anticipate changes in the labour market and skills mismatches;
- support digital skills and competences in science, technology, engineering, mathematics (STEM), entrepreneurship and creativity;
- encourage Member States to modernise their education and training systems.
(iii) Ensuring an appropriate ethical and legal framework
Some AI applications may raise new ethical and legal questions, related to liability or fairness of decision-making. The General Data Protection Regulation (GDPR) is a major step for building trust and the Commission wants to move a step forward on ensuring legal clarity in AI-based applications. In 2019 the Commission will develop and make available:
- AI ethics guidelines (AI HLEG Draft Ethics Guidelines)[viii];
- Guidance on the interpretation of the Product Liability Directive.
In this sense, three major points stand out: exponential industry growth, increased accessibility of AI technology, and the adoption of AI as the way of doing business today.
For that purpose, the full development of the Digital Single Market Strategy will be crucial in order to drive the adaptation of current legal regimes, to define new regulations with the aim of providing an appropriate legal platform for AI technology to thrive, and, finally, to factor AI into the legal changes that will inevitably come.
[i] https://ec.europa.eu/digital-single-market/en/artificial-intelligence.
[ii] Traci Gusher-Thomas (KPMG), “Four trends shaping artificial intelligence in business”. Available from: https://info.kpmg.us/news-perspectives/technology-innovation/trends-shaping-artificial-intelligence-in-business.html.
[iii] Chris Holder and Vikram Khurana, “Artificial Intelligence: the evolving legal landscape – and what businesses need to do to comply”. Available from: https://www.bristowscookiejar.com/trends/artificial-intelligence-the-evolving-legal-landscape-and-what-businesses-need-to-do-to-comply.
[iv] https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe
[v] https://ec.europa.eu/digital-single-market/en/news/european-artificial-intelligence-demand-platform-information-day-and-brokerage-event.
[vi] https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence.
[vii] https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence.
[viii] https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai.
Picture credits: Rede… by geralt.