What Are the Emerging Legal Issues in an AI-Driven World?

By Ayush Chandra

Introduction

Artificial Intelligence (“AI”) has become a marketplace reality. The rise in computing power, improved algorithms and the availability of vast quantities of data are transforming economies around the world. According to the International Data Corporation (“IDC”), the AI market is forecast to reach $35.8 billion this year, an increase of 44% over 2018[1]. IDC also projects that global spending on AI will more than double by 2022, reaching $79.2 billion.

In this article, we survey a number of emerging legal issues connected with the use of AI and offer some thoughts on how the law might respond.

What is artificial intelligence?

AI refers to the capacity of a computer to perform tasks commonly associated with human intelligence[2]. It includes the ability to reason, discover meaning, generalize, learn from past experience, and find patterns and relationships in order to respond dynamically to changing conditions.
In 2017, Accenture Research and Frontier Economics conducted a study analyzing the economic growth rates of 16 industries and projecting the impact of AI on global economic growth. The report concluded that AI has the potential to boost profitability by an average of 38% and to generate an economic uplift of US$14 trillion across 16 industries in 12 economies by 2035[3].
The promise of AI is better decision-making and improved processes. In their book Machine, Platform, Crowd, MIT professors Andrew McAfee and Erik Brynjolfsson write that “the evidence is overwhelming that, whenever the option is available, relying on data and algorithms alone usually leads to better decisions and forecasts than relying on the judgment of even experienced and ‘expert’ humans.”[4] The fear is that AI operating without oversight will lead to a loss of human control and to poor outcomes.

The legal aspects of AI

Commentators have observed that the growing power of AI will raise new and significant legal and ethical issues. Some have identified the need for AI ethicists to anticipate the harmful directions in which this technological progression might take us[5]. In October 2016, the British House of Commons issued a report on Robotics and Artificial Intelligence, which highlighted several ethical and legal issues including automated decision-making, minimizing bias, privacy, and accountability[6]. On December 18, 2018, the European Commission’s High-Level Expert Group on Artificial Intelligence (“AI HLEG”) published the first draft of its Ethics Guidelines for Trustworthy AI[7]. According to the guidelines, trustworthy AI requires both an ethical purpose and technical robustness:
Ethical purpose: the development, deployment, and use of AI should respect fundamental rights and applicable regulation, as well as core principles and values, ensuring “an ethical purpose”; and
Technical robustness: AI should be technically robust and reliable, since, even with good intentions, the use of AI can cause unintentional harm.

Privacy

The volume and scope of data collection will keep privacy at the forefront as one of the most significant legal issues that AI users will face going forward. AI systems process large volumes of data; as more information is collected and used, more privacy concerns arise.
Governments are now modernizing their privacy laws to respond to privacy concerns fueled by public outcry over large data breaches and the unfettered use of data by large corporations. Consumers have grown increasingly concerned about the potential misuse of their data. In 2015, the European Commission conducted a survey across the 28 member states of the European Union which showed that approximately seven out of ten people were concerned about their information being used for a purpose other than the one for which it was collected.

The EU and international regulators have taken an active interest in AI, not only recognizing its benefits but also remaining wary of its potential risks and unintended consequences.[8] The European Parliament passed the General Data Protection Regulation (“GDPR”), a comprehensive set of rules intended to protect the personal data of all EU citizens from unlawful collection or use by any party. Under the GDPR, organizations must be clear and concise about their collection and use of personal data, and must disclose why the data is being collected and whether it will be used to create profiles of people’s behavior and attitudes. In other words, organizations must be transparent about the type of data they gather about customers and how this information will be used. Critics argue that the GDPR could create a barrier for developers seeking to design more sophisticated and complex algorithms. Unlike the EU, US federal regulators have yet to enact laws governing the use of personal information in the AI world. Sensing the inevitability of data regulation, some large American corporations such as Apple are advocating for the introduction of similar legislation in the United States.[9] On January 18, 2019, Accenture published a report describing a framework to help US federal agencies evaluate, deploy, and monitor AI systems.

Contracts

The novel nature of AI may require people or entities contracting for AI services to seek out special contractual protections. In the past, software would simply function as agreed. Machine learning, however, is not static but continually evolving. As noted by McAfee and Brynjolfsson, “machine learning systems get better as they get bigger, run on faster and more specialized hardware, gain access to more data, and contain improved algorithms.” The more data algorithms consume, the better they become at recognizing patterns. Parties might consider contractual terms warranting that the technology will perform as intended, and providing that contractual remedies will follow if unexpected issues arise. These added protections might place particular importance on audit rights with respect to the algorithms within AI contracts, appropriate service levels in the agreement, a determination of the ownership of improvements created by the AI, and indemnity provisions in the event of malfunction. AI will demand a more creative approach to contracts, one in which drafters will be required to anticipate where machine learning might lead.

Torts

Machine learning continually evolves, making ever more complicated decisions based on the data it processes. While most outcomes are expected, there is a real possibility of an unanticipated or unfavorable outcome given the reduced degree of human supervision. The autonomous and artificial nature of AI raises new questions about the attribution of fault.

Tort law has traditionally been the tool the law uses to address changes in society, including technological advancements. In the past, the courts have taken the established analytical framework of tort law and applied its legal principles to the facts as they are presented before the court. The most common tort, the tort of negligence, focuses on whether a person owed a duty of care to another, whether that party breached the standard of care, and whether losses were caused by that breach. Reasonable foreseeability is a fundamental concept in negligence: the test is whether a reasonable person would be able to foresee or expect the general consequences of his or her conduct, without the benefit of hindsight.

The further AI systems move away from traditional algorithms and coding, the more they can perform actions that are not merely unforeseen by their producers but genuinely unforeseeable. Where foreseeability is lacking, are we left in a position where no one is liable for an outcome that may have a damaging impact on others? One would expect our governments to want to prevent such a result. In such situations, the law might replace liability based on negligence with liability based on strict liability. The doctrine of strict liability, also known as the rule in Rylands v Fletcher[10], provides that a defendant may be held legally liable even where neither an intentional nor a negligent act has been proven, so long as it is shown that the defendant’s actions resulted in injury to the plaintiff.


[1] International Data Corporation, “Worldwide Spending on Artificial Intelligence Systems Will Grow to Nearly $35.8 Billion in 2019, According to New IDC Spending Guide” (11 March 2019), online.
[2] B.J. Copeland, “Artificial intelligence” (17 August 2018), Encyclopedia Britannica, online.
[3] Mark Purdy & Paul Daugherty, “How AI Boosts Industry Profits and Innovation” (2017), Accenture, online.
[4] Andrew McAfee & Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our Digital Future (New York: W.W. Norton & Company, 2017).
[5] John Murawski, “Need for AI Ethicists Becomes Clearer as Companies Admit Tech’s Flaws” (1 March 2019), The Wall Street Journal, online.
[6] House of Commons Science and Technology Committee, “Robotics and artificial intelligence” (12 October 2016), online.
[7] European Commission, “Have your say: European expert group seeks feedback on draft ethics guidelines for trustworthy artificial intelligence” (18 December 2018), online.
[8] Deloitte, “AI and risk management” (2018), online.
[9] Mike Allen & Ina Fried, “Apple CEO Tim Cook calls new regulations ‘inevitable’” (18 November 2018), Axios, online.
[10] Rylands v Fletcher [1868] UKHL 1, (1868) LR 3 HL 330.
