#LegalTechPills
  • Technology, Media & Telecommunications

In search of good practices in generative artificial intelligence

By Helder Galvão


The advent of virtual assistants based on artificial intelligence, or chatbots, in their various models and formats, has provoked great and controversial debates. On one side are the enthusiasts, who see in these resources a new and inexhaustible frontier for knowledge and for the enhancement of the most diverse professional activities. On the other side are the more skeptical, who question the veracity of the sources and the coherence of the content created by the algorithms behind these mechanisms. They also point to a potential discriminatory bias and copyright infringement, as well as the impacts on the labour market.

However, the only certainty is that these virtual assistants, such as ChatGPT, are inexorable: like the path of water, in the words of Kevin Kelly of the US magazine Wired, there is no way to stop them from advancing. But how do we get the best out of humans and machines, recognizing the inevitable complementarity of both rather than the replacement of one by the other? How do we make ethical and responsible use of them, so that, for example, prejudice, the dissemination of false discourse and plagiarism are not tolerated?

There are numerous definitions of Artificial Intelligence (AI). According to Dora Kaufman, in a simplified perspective, we can think of AI as the reproduction of the behaviours that the human brain controls. In certain respects, however, the mental process of Artificial Intelligence surpasses the human one. A calculator is an example: obviously, we cannot compete with its capacity to carry out a complex calculation, one whose answer would demand far more time from a human, if it could be solved at all. In other words, the capacity of AI exceeds human limitations, as observed in different situations of daily life with the use of appropriate technologies.

The concept of machine-related intelligence, similarly, embraces a wealth of definitions. This is the case with the famous Turing Test, devised by the British mathematician and computer scientist Alan Turing. Turing sought a definition of intelligence that applied to both humans and machines, such that a machine would be deemed intelligent if it passed the test. Controversies aside, and more than half a century later, the only certainty is that we cannot afford to dismiss these systems, because, as Kaufman reminds us, two recent and correlated events have galvanised AI research: the explosion of an enormous amount of data on the Internet (Big Data) and the Deep Learning[1] technique.

Massive investment in research and development – two words that put any nation or company ahead of the rest – in exploring these two aspects has given rise to conversational tools (chatbots) based on generative artificial intelligence (GenAI). Put simply: a type of intelligence that uses algorithms capable of generating content, such as text, images and videos, of such high quality that it is difficult to distinguish it from content created by a human.

The recent advances in the area, notably ChatGPT, from the US company OpenAI, have, however, opened a kind of Pandora's Box. All the problems and clichés historically aimed at machines, such as the fear that they will finally replace humans, have become the hot topic again. But, as in the Greek myth, there is hope: namely, that GenAI will enable an exponential advance of scientific knowledge in favour of humanity and move the human race forward, to paraphrase the beatnik Jack Kerouac.

A guide, therefore, in the best denotative sense of the word, seeks to give direction to the debate, presenting ways to use GenAI ethically, responsibly and, above all, in compliance with the legal system. As with any other resource based on computational intelligence, however, it is up to users to handle their inputs correctly, since GenAI tools – including the most popular of them, ChatGPT – require certain precautions and care.

Indeed, the quality of the text generated by ChatGPT is so high that it is difficult to distinguish it from what is written by a human, which brings both benefits and risks. With that in mind, we highlight below five good practices so that users and society at large can make the most of these tools.

Before this, however, it is necessary to always adopt the following three pillars, forming an authentic "tripod": (i) fair and responsible use; (ii) ethical use; and (iii) social impact, namely:

  • Fair use: within the concept of fair use, GenAI must be used responsibly, so as not to cause damage to third parties, as in the case of disseminating hate speech or discrimination, or spreading slanderous or untrue claims. Be warned that failure to comply with this practice may cause harmful effects, generating damages of corresponding extent for the offended party and, of course, civil and criminal liability for the perpetrators and offenders.
  • Ethical: GenAI tools, which are still (and always will be) being improved (note that ChatGPT is still in a preview phase), sometimes present answers that initially seem plausible but are incorrect, nonsensical or lacking in reliable sources. This behaviour is common to large language models and is known in computer science as "artificial intelligence hallucination". Therefore, and as clichéd as it may sound, conversational tools based on generative artificial intelligence are not an end in themselves but a means: they are aids to an activity, not its end product. They will not replace the teacher, the lawyer, the doctor, the student, the advertiser, or the contact between seller and client, but they will work as a skillful resource and an enhancement for the execution of these tasks. For that reason, a scientific work or a medical diagnosis can never be conceived solely and exclusively by GenAI, regardless of its model and degree of accuracy. Human supervision is fundamental and indispensable; otherwise it could lead to cheating in the classroom or put a patient's health at risk.
  • Social impact: GenAI must serve society, promoting progress, broadening access to knowledge and providing solutions to a wide variety of queries, from the simplest to the most complex. Any use outside this premise is questionable and debases its purpose.

 

i. Right Prompt

Using prompts (the "questions") correctly is essential. To paraphrase Erik Nybo, a pioneering lawyer in lawtech, a poorly designed or superficial prompt will generate an inadequate or worse-performing response, which will (i) disappoint the user; (ii) serve no practical purpose; and (iii) waste time.

Writing correct prompts will differentiate those who know how to work with artificial intelligence from those who don't. A renowned British law firm even recently recruited professionals under the title of GPT Legal Prompt Engineer. Here, then, are some tips:

– Be concise: keep your question to as few words as possible.

– Be specific and provide context: use specific details, examples and dates to make your prompt easier to understand and answer.

– Use proper grammar and spelling: correct language makes your prompt easier to understand.

– Avoid asking multiple questions in a single prompt: ask only one question to get a clear and concise answer.

 

ii. Supervision and checking

– Double check: The content generated by chat applications based on generative artificial intelligence, such as ChatGPT, comes from models trained through a complex system of rewards. In computer science terms, such a model can be over-optimized, which harms its performance.

Therefore, regardless of the result obtained, it is essential to verify the information in a kind of revision, a double check. In other words, GenAI is not infallible and may, for example, state an erroneous historical fact.

– Checking: If the recipient of certain content, such as a teacher, has doubts about its origin – in other words, whether it was created by students or only by GenAI – it is possible to check its provenance through detection software and other checking mechanisms.

– Updates: As with any computer system, there are time limitations: GenAI applications have a training cut-off and may not mention more recent events. Expectations should be managed accordingly.

– Algorithmic bias: Models such as ChatGPT are trained in various stages. Through certain prompts, which can be manipulated, a distortion or irrational bias may emerge on a given topic, including sensitive ones, such as hate speech directed at an ethnicity, prejudice towards a social class or minorities, and potential offences against individuals, especially those publicly exposed, such as politicians and celebrities. It is therefore necessary to make responsible use of these tools, avoiding speculation and prompts that violate intimacy, privacy and the right to be forgotten.

 

iii. Anti-plagiarism

– Sources: The lack of primary-source attribution in the production of texts, images and content in general, and the use of unmarked paraphrase, are indeed the Achilles' heel of these tools. However, as noted, applications such as ChatGPT function as a vehicle for their users to achieve certain purposes; they should never be seen as an end in themselves. The content generated, without exception, should therefore pass through filters of veracity and bibliographic rigour, with quotation marks used when referring to pre-existing texts and with creative transformation (the interference of the human spirit) by its users. In other words: it is not advisable to accept content generated by GenAI as entirely faithful, because, as the philosopher Noam Chomsky puts it, ChatGPT is a form of hi-tech plagiarism.

– Novelty and originality: Still on the subject of creative transformation, the creative interference of the human spirit in the content produced by GenAI is highly recommended. As is well known, novelty and originality are essential requisites for the recognition of copyright. If a text, image, sound or other content created by GenAI does not undergo this intervention, even with a minimal contribution from the user, the risk of plagiarism is high, i.e., the undue use of third-party works in the texts, images or sounds generated by the application, without proper citation of authorship. After all, remember: ChatGPT, for example, may at most create something new. Its originality, which admits subjective interpretations depending on the user's perspective, may be questioned, to the extent that its technology tends to be circular, that is, it merely recombines information already existing in big data.

 

iv. Accountability

– Transparency: Regardless of the environment (classrooms, law firms, consultancies, advertising agencies, the literary market, among others), it is always prudent to make clear in advance – a disclaimer – that certain content was co-produced through GenAI-based platforms or resources. This conduct reflects the best collaborative model that GenAI provides, which is its essence, not to mention the corporate mantra of aligning expectations. After all, transparency, the provision of information and good faith are basic pillars of any personal or professional relationship.

 

v. Continuous improvement

– Virtuous and vicious cycles: Any GenAI-based model has a virtuous cycle – or, unfortunately, a vicious one, depending on human interference. In the case of ChatGPT, the model makes efforts to reject inappropriate requests, although it occasionally responds to malicious prompts or behaves in a biased manner. It is therefore the user's duty to report and flag errors and harmful or dangerous content through the moderation and feedback links available, and thus promote continuous improvement.

 

***

 

No, generative artificial intelligence will not replace professionals in their various activities, such as (but not limited to) those described above. However, professionals should certainly know how to explore and use these tools in favour of their activities, precisely so as not to be replaced by those who have already mastered the best techniques and resources. Increasingly, professionals with analytical behaviour will be required, delegating to GenAI the tasks that can be automated.

Episodes in our history, such as the advent of the piano accordion, the video cassette and even internet banking, have proved that resisting technological progress is swimming against the current. Remember: GenAI cannot reflect, feel emotions, show empathy or form original reasoning. Nor does it have the social understanding, cognitive flexibility, improvisation and creativity needed for unusual and atypical situations, let alone manual skills or ethical and moral decision-making. Such behaviours are innate to the human spirit, and they remain, always and exclusively, with you, the reader.

This guide, therefore, points out ways in which we can explore AI-based virtual assistants – such as the most famous of them so far, ChatGPT, from the US company OpenAI – in a positive and constructive way. That is, it identifies the areas and activities we can explore from a perspective of progress, especially in the legal market, and certain good practices to adopt in order to make fair, ethical and socially impactful use of these tools. After all, as the British jurist Richard Susskind has stated, "ChatGPT is the most remarkable system I have seen in over 40 years working with AI".

 

[1] Still according to Kaufman, Machine Learning is a sub-area of AI. The technique does not teach machines how to, for example, play a game; it teaches them how to learn to play a game. The process is distinct from traditional "programming", and this subtle a priori difference is one of the foundations of the recent advance of Artificial Intelligence. All elements of the online movement – databases, tracking, cookies, search, storage, links etc. – act as AI "teachers". The most familiar term today is Deep Learning.
