LEXIST

Legal Status of Artificial Intelligence Beyond Individual and Legal Entities

I. Legal Nature of Artificial Intelligence

The legal status and definition of artificial intelligence (AI), which is gradually entering our
lives and whose use is expected to expand, are currently the subject of debate. As a result, the
liability regime for AI has not yet been clearly defined. The European Parliament is actively
working on AI legislation to address this gap.

One prominent proposal regarding the personality of AI comes from the European Parliament,
which suggests the concept of electronic personality. Owing to the autonomous nature of AI
systems, it is thought that AI may in the future possess a personality resembling that of a
human while remaining distinct from a natural person. In its proposal report, the European
Parliament accordingly argues that, in the long term, AI cannot be regarded as mere property,
since it will exhibit features associated with personality, such as bearing rights and obligations¹.

Another proposal regarding the personality of AI is the concept of legal personality. On this
view, AI possesses a personality distinct from that of natural persons and should therefore be
assessed within the framework applicable to legal entities².

While the prevailing view in legal doctrine supports granting AI a sui generis personality,
there are dissenting opinions. These typically argue that AI is a machine produced to facilitate
daily life, akin to a tool, and will never become human-like. On this view, attributing
personality to AI is therefore unnecessary³.

II. Liability of Artificial Intelligence for Harm

1. AI as a Product

If AI is regarded as devoid of any personality, i.e., treated as a service or product, direct
liability of the AI itself is not applicable. As recommended by the European Parliament,
current legal regulations impose responsibility on the producer, operator, owner, or user of
AI, rather than on the AI itself⁴. Legal scholars likewise suggest that AI's own liability could
arise only once it becomes fully autonomous and capable of understanding its own actions⁵.

Within the framework that treats AI as a product, Turkish law is instructive: the Product
Safety and Technical Regulations Law No. 7223 provides, under the heading “Product
Liability Compensation” in Article 6, paragraph 1:

“(1) The manufacturer or importer of the product is obliged to remedy the damage caused
to a person or property by the product…”

Accordingly, if a defective product causes harm, responsibility lies with the manufacturer or
importer under this provision. The producer may thus be held liable in contract or in tort
pursuant to the relevant provisions of the Turkish Code of Obligations. The prevailing global
approach to AI liability reflects a similar perspective: recent developments in the United
States and China show responsibility for accidents involving autonomous vehicles shifting
away from the driver and towards the manufacturer⁶.

One viewpoint suggests that it could be beneficial to apply strict liability for damages caused
by AI even where the producer is not at fault. Under such a regime, the manufacturer could,
by analogy, be held responsible on the basis of equity (Article 65 of the Turkish Code of
Obligations) or strict liability (Article 70 of the Turkish Code of Obligations). If this liability
regime were accepted, the injured party would only need to prove the damage and the causal
link between the damage and the AI system.

Setting aside the strict liability regime, damages caused by AI may also be compensated
under the ordinary liability regime through the provisions of tort law. In that case, the
manufacturer's liability arises only if the injured party can prove the elements of a tort.

2. AI as an Individual Entity

According to the assessment in the European Parliament's recommendation report, current
regulations do not allow AI to be held accountable for its own actions. However, given the
possibility of future robots learning from their own experiences and interacting with their
environment in unique ways, the complexity of apportioning liability for damage caused by
autonomous robots has been acknowledged. The Parliament has therefore emphasized that
existing norms are inadequate and that new rules compatible with technological
developments are needed.

In line with these needs, the European Parliament aims to establish a legal status for AI
called electronic personality, creating a framework for its liability regime. Within this
framework, an insurance scheme covering all potential liabilities of robots, similar to traffic
insurance for vehicles, could be established to address the complexity of apportioning
responsibility for damage caused by autonomous robots. In the long term, sophisticated
autonomous robots might be granted electronic personality status, making them responsible
for damage resulting from their autonomous decisions.
