I. INTRODUCTION

Undoubtedly, one of the artificial intelligence models that has left its mark on recent years is GPT-3, that is, the third generation of the Generative Pre-trained Transformer model.

GPT-3 was developed by OpenAI, an artificial intelligence R&D company that carries out projects and research in many groundbreaking areas, especially artificial intelligence, and whose founders and backers include Elon Musk, CEO of companies such as SpaceX and Tesla; Sam Altman, known for his ventures Loopt and Y Combinator; and Ilya Sutskever, one of the creators of systems such as AlexNet, AlphaGo, and TensorFlow. GPT-3 is defined as an autoregressive language model that uses deep learning to produce content similar to the texts and graphics written and created by humans.

It is stated that while the previous version, GPT-2, processed data with 1.5 billion parameters, GPT-3 performs its analysis with 175 billion parameters and can therefore produce very advanced content. [1] GPT-3 can reportedly answer a question put to it in highly academic language, as if a human, or indeed an academic, had written the response. However, it is also noted that an artificial intelligence capable of producing such high-quality content carries many risks and can cause many problems.

This article covers the risks posed by the GPT-3 artificial intelligence model, the problems it may cause, and a legal examination of them.

II. THE RISKS THAT GPT-3 MAY CAUSE IN LEGAL TERMS AND THE LEGAL EVALUATION OF THESE RISKS

Since GPT-3 is a language model implemented in software, its risks need to be evaluated in relation to subjects such as language, expression, writing, news gathering, and reporting. In this context, as will be explained below, issues such as personal rights, the offense of libel in criminal law, and copyright protection in the field of intellectual property constitute the main risks that GPT-3 creates.

Since GPT-3 is an artificial intelligence that can be used for coding, programming, and graphic design with CSS, JavaScript, and Python, it should be noted that its use carries the risk of rapid production of algorithms and bots, scam (fraud) schemes, and phishing.

Therefore, GPT-3 may give rise to legal disputes in many areas, such as fraud and cybercrime (illegal access to a data processing system; defacement, blocking, or destruction of data in such a system; and so on).

Fundamentally, GPT-3, which produces content by scanning data supplied by humans, cannot always apply the logical and intuitive selection abilities of humans. For example, where a user states that they are eating but does not specify what ("I am eating ..."), GPT-3 will scan the data most frequently supplied by humans, select the object "apple", and produce the completion "I am eating an apple".
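
To make this behavior concrete, the short sketch below shows how an autoregressive language model continues an open-ended prompt. It is a minimal illustration only: it uses the publicly available GPT-2 model through the Hugging Face transformers library as a stand-in, since GPT-3 itself is accessible only through OpenAI's hosted API, and the exact completion will vary from run to run.

    # Minimal sketch: autoregressive completion of an open-ended prompt.
    # Assumption: GPT-2 via the Hugging Face "transformers" library is used as a
    # stand-in for GPT-3, which is only reachable through OpenAI's hosted API.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "I am eating"
    # The model does not know what the speaker is actually eating; it simply
    # continues the sentence with whatever tokens were most likely given its
    # training data.
    result = generator(prompt, max_new_tokens=5, num_return_sequences=1)
    print(result[0]["generated_text"])  # e.g. "I am eating an apple" (output varies)

As the comments indicate, the completion reflects statistical frequency in the training data rather than any understanding of what the user actually ate.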

As can be seen, the content produced by GPT-3 is not a direct reflection of logical and intuitive human thought; it takes shape according to the data that humans have supplied to the corpus GPT-3 draws on. In appearance, such output is convincing enough to give the impression that it was created by a human being, yet in terms of accuracy and consistency it can lead to dangerous consequences such as manipulation and disinformation.

It is considered that GPT-3 carries risks of violating personal rights and the right to obtain information. Although this differs from language to language and from culture to culture, it seems probable that content created by GPT-3 may contain many discriminatory expressions, for example attributes that are sexually humiliating, harmful, or insulting to women, since so much content across the internet portrays women as sexual objects.

In such a case, in addition to the violation of personality rights under civil law, it will be possible for acts corresponding to the offense of libel under criminal law to be committed, leaving a person or a group of people whose honor and dignity have been damaged, even though the offender can no longer be identified.

As for content generated by GPT-3, the model enables the creation of texts or visuals by scanning vast amounts of data in online databases. One may therefore ask, first of all, whether content created by GPT-3 would infringe third parties' intellectual property rights, since that content is produced from data supplied by people; in particular, problems may arise as to whether parts of the produced content violate copyright. For instance, academic and literary texts (e.g. theses, articles, etc.) are routinely checked by software such as Turnitin or Urkund to ascertain whether they have been plagiarized from other works. Generally, academic or literary texts contain paraphrased sentences whose initial versions originate from quoted source texts, and/or explicit references are made to those texts, whereupon a new, original text is formed. Although whether references have been made to the quoted original texts, or whether texts have been plagiarized, will only become clear as the use of this artificial intelligence spreads, it should be noted in advance that such intellectual property risks may, at least in theory, exist.
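
The comparison assumed above, checking a generated text against existing sources, can be illustrated with a very simple similarity measure. The sketch below uses TF-IDF cosine similarity purely as an illustration; it does not reflect how Turnitin, Urkund, or GPT-3 actually work internally.

    # Naive illustration of comparing a generated text against a source text.
    # Assumption: this is NOT how Turnitin or Urkund operate; it only shows the
    # general principle of measuring textual similarity.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    source_text = "The quick brown fox jumps over the lazy dog."
    generated_text = "A quick brown fox leapt over a lazy dog."

    vectors = TfidfVectorizer().fit_transform([source_text, generated_text])
    similarity = cosine_similarity(vectors[0], vectors[1])[0][0]

    # A high score suggests the generated text closely mirrors the source,
    # which is where copyright and plagiarism questions would arise.
    print(f"Similarity score: {similarity:.2f}")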

Having said that, considering that GPT-3 could be exploited in many harmful acts, since it enables a wide range of fraud, from attacks on governments' security software and databases to persons' and companies' bank accounts, phishing, and the capture of data held in IT systems, it becomes obvious that the IT universe, governments, companies, and real persons are exposed to great risks regarding their property, security, and data privacy. For instance, webpages used for online shopping or for paying electricity, water, natural gas, and similar bills may be counterfeited, as in phishing, to make fraud easier to commit.

In light of the foregoing, one may argue that GPT-3 could give rise to crimes whose perpetrators cannot be identified, and could lead to legal debates where crimes and/or tortious acts might be deemed to affect individuals, society, or a part thereof.

Indeed, strong evidence for this hypothesis is the case of Liam Porr, who reached the top of Hacker News and managed to fool thousands of people by creating a fake blog through GPT-3. [2] Although Liam Porr's actions were part of a social experiment, most readers believed that the content on the Adolos blog was written by a human.

As can be inferred from this case, the distinction between human and artificial intelligence has already begun to disappear. That being so, determining, in a legal context, who caused a text to be produced through GPT-3, and whether that person is the genuine owner of the text, should be regarded as a factor that facilitates allocating liability for the legal risks mentioned above.

In this respect, for instance, who will be responsible for the production of a racist, insulting, or humiliating text that the person who had GPT-3 create the content did not, in essence, intend? The liability of the person who produced the text through GPT-3 may of course always be at stake if there was an opportunity to examine, review, and modify the content before any infringement was caused. Such situations are the natural outcome of the duty of care imposed on persons in civil law and of objective attribution in criminal law.

Likewise, for actions leading to infringements, technical analysis may help ascertain whether the use of GPT-3 would, in itself, be assessed under the "fault" element in civil law and under the "moral element of the crime" in criminal law, that is, as a deliberate or negligent act.

In summary, although the stunning digital architecture of GPT-3 seems set to spark heated debates in the legal field, the coming period will show what it actually entails.

III. CONCLUSIONS

In the face of technology that develops swiftly with each passing day, there is no doubt that daily routines and requirements are being reshaped. Legislators therefore enact new regulations in order to keep pace with these evolving requirements. As for GPT-3, it is, for now, an artificial intelligence instrument whose capabilities we know, and we must wait to see what kinds of problems it may lead to in practice.

This human-made artificial brain, built on 175 billion parameters, will undoubtedly take us to scenarios far beyond what has been examined and evaluated here. To analyze all these scenarios that carry legal risks, and to delineate liabilities and sanctions, we believe that legislators should follow developments closely and that a framework of liability and sanction criteria suited to such artificial intelligence infrastructure should be established.

Footnotes

1. https://www.bbc.com/turkce/haberler-dunya-53692902

2. https://webrazzi.com/2020/08/17/gpt3-blog-hacker-news/

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.