#LegalTechPills
  • Intellectual Property and Information Technology
  • Technology, Media & Telecommunications

Generative AI: Beyond the Hype

There is an undeniable, widespread euphoria surrounding generative AI, and justifiably so: it will revolutionize the way we work, consume, interact, and entertain ourselves. Simply reading the news or online discussions, whether or not experts are involved, gives the impression that systems like ChatGPT and Bard are already in universal use, as if we were all directly dependent on them or on the verge of becoming so. The hype spreads on its own, driven by a reckless fear of missing out, and has even produced a wave of new (or at least self-proclaimed) AI experts.

However, according to a recent article by Ryan Browne, this frenzy is expected to face a reality check, a "cold shower," in the coming year. Browne warns of a wave of promises and pitfalls in generative AI, citing the rising, unavoidable costs of operating it. Combined with continuing calls for regulation, audit requirements, privacy concerns, ethical issues, and copyright questions, these costs will change the landscape of AI worldwide.

Cezar Taurion, a former IBM director in Brazil, echoes the point: companies are beginning to realize that the excitement of using ChatGPT for everything suits individual users or limited experiments with the public versions of large language models (LLMs), known for their general language understanding and generation capabilities. But once you shine a light on the subscription cost of a more robust, AI-powered office suite, on the order of thirty dollars per user per month, it becomes hard to explain the ROI and secure approval from the CEO and the board.

Furthermore, Taurion points out that the generative AI hype was driven purely by emotion, with unrealistic, overly ambitious expectations set against ill-defined business problems. The technology was overhyped, and "doing something with ChatGPT" became a mantra. Most experiments stayed at the most basic level, using prompts to accomplish something; at most, they involved a simple API connection to an LLM. Simple and straightforward, but with no competitive differentiation: everyone can do that.

Diving deeper into specific business needs, more complex models tailored to a company's requirements become necessary, built with techniques such as Retrieval Augmented Generation (RAG). RAG is an architecture that improves the quality of responses generated by LLMs by retrieving external sources of knowledge to complement the model's internal representation of information. Taurion explains that two fundamental components, largely overlooked during the euphoric phase, come into play: data and governance. Data is crucial for any AI project; as the saying goes, "No data, no ML (Machine Learning)." Governance deals with legal issues, currently the Achilles' heel of everything related to AI: ethical concerns, transparency, copyright, hate speech, and so on. After all, "AI for Business" is different from "AI for Fun."
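The RAG pattern described above, retrieve relevant material, then ask the model to answer from it, can be sketched in a few lines. The snippet below is a minimal, illustrative toy: retrieval is naive keyword overlap rather than the vector-embedding search used in practice, and `generate_answer` is a stub standing in for a real LLM call; the function names and sample documents are assumptions for illustration, not any product's actual API.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern.
# Production systems use vector embeddings for retrieval and a real
# LLM completion call where generate_answer is stubbed below.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Combine the retrieved context and the user question into one prompt."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

def generate_answer(prompt: str) -> str:
    """Hypothetical stand-in for an LLM completion call."""
    return f"[LLM would answer here, grounded in the prompt]\n{prompt}"

# A toy knowledge base of firm documents (invented examples).
docs = [
    "The NDA's confidentiality term lasts five years after termination.",
    "Force majeure clauses excuse performance during unforeseeable events.",
    "Our billing policy requires invoices within thirty days.",
]

question = "How long does the confidentiality term last?"
answer = generate_answer(build_prompt(question, retrieve(question, docs)))
```

The point of the pattern is that the model answers from the firm's own documents rather than from its training data alone, which is exactly where the data and governance questions above become unavoidable.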

Returning to Browne's article, generative AI models such as OpenAI's ChatGPT, Google's Bard, Anthropic's Claude, and Synthesia require substantial computing power to run the complex mathematical models that generate responses to user requests. Companies need to acquire high-power chips for AI applications, typically advanced graphics processing units (GPUs) designed by the American semiconductor giant Nvidia. Now more and more companies, including Amazon, Google, Alibaba, Meta, and reportedly OpenAI, are designing their own AI-specific chips to run these programs. That shift naturally triggers a cycle of innovation in chip development, at least over the medium term. It is another reason a kind of winter is expected for generative AI in the coming year: the focus will move from front-end applications to back-end infrastructure.

What impact will this have on the legal market? It is still premature to conclude that the sector will cool off. We are at the classic point on the hype curve, driven mainly by the advent of platforms such as Harvey and CoCounsel, from the American company Casetext. Expectations are high for the benefits of generative AI, especially in a field where producing intellectual content, legal arguments, contract clause drafting, and research on all manner of topics, sits at the core of the profession. Even if current tools are not yet highly accurate, or lack reliable sources, they already provide significant assistance to legal professionals, offering at least introductory insights into specific topics or a benchmark for contract clauses. Regardless of the current frenzy and the pessimistic forecasts for next year, their benefits are already noteworthy and, most importantly, inevitable. Put simply, whatever the current verdict on generative AI tools, even where they amount to a simple API connection to an LLM, the legal market has made steady progress, and the trend is toward more: in this field, not even the sky is the limit.

As Taurion says, generative AI is a powerful tool when used correctly and is very likely to boost productivity and create new operating models. But we must be able to assess the risks, benefits, and capabilities of emerging technologies accurately, and not take the hype as a given.
