Generative AI is the umbrella term for the recently released commercial software, built on neural networks, that can converse with human users as if the machine were a real person. Generative AI is a revolutionizing force in business. It is especially revolutionary for lawyers, as it can significantly increase attorney efficiency – so long as it is used correctly.
Generative AI (“Gen-AI”) is generally defined by the tech community as: machine learning processes that use text, images, video and audio to respond to prompts and generate content in a way that simulates the learning and decision-making processes of the human brain (see IBM’s definition of Gen-AI for more detail).
Popular Gen-AI systems have come to market over the past year or so, mostly released by major technology companies but also by several new entrants. Some of the more popular systems include OpenAI’s ChatGPT (developed in partnership with Microsoft), Google’s Gemini, Anthropic’s Claude and Amazon’s Bedrock platform, which hosts open models such as Meta’s Llama 3 and Mistral, as well as offerings from IBM, Salesforce and Oracle. Each of these offerings boasts metrics and benchmarks intended to outdo the others. Ultimately, at this point, there is little practical difference between the systems: they all rely on the same fundamental neural-network and machine-learning technologies, and each differentiates itself through additional features.
Gen-AI has the potential to cause major shifts in the way businesses operate. Many businesses have already embarked on an AI strategy and are using the technology in some form – whether to streamline business workflows, to equip customer-support teams with better knowledge or to reduce menial, manual tasks.
Basic Phases of Gen-AI
Training
Generative AI systems must be trained.
The large public systems (such as ChatGPT) have scoured the Internet and ingested billions upon billions of pieces of information – documents, webpages, audio, video and more. That data is used to train the AI system, typically a Large Language Model (or LLM), and the training produces a foundation model that serves as the basis of AI applications. For example, OpenAI, the company behind ChatGPT, trained its models on information from the public Internet so that their knowledge grew – much as a company trains employees to learn skills and become competent in a particular knowledge domain.
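As a grossly simplified illustration only (real LLMs learn neural-network weights, not word counts, and the corpus here is hypothetical), the toy sketch below “trains” on a few sentences by counting which word tends to follow which – a crude stand-in for the statistical patterns training extracts from ingested data:

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count word-to-next-word transitions -- a toy stand-in for the
    statistical patterns an LLM extracts from its training data."""
    model = defaultdict(Counter)
    for doc in corpus:
        words = doc.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often seen after `word` during training."""
    following = model.get(word.lower())
    return following.most_common(1)[0][0] if following else None

# Hypothetical mini-corpus standing in for billions of ingested documents.
corpus = [
    "the court granted the motion",
    "the court denied the motion",
    "the court adjourned",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "court" -- seen 3 times vs. "motion" twice
```

The point of the analogy: the more (and better) data the system ingests, the more reliable its predictions become – which is why training scale matters so much.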
Tuning
Gen-AI systems must be tuned.
Once a sufficient amount of data has been ingested to train it, the system’s foundation model is tailored to a specific area or application. In the case of the public LLMs, that tuning is generalized and focused on minimizing errors. For industry-specific artificial intelligence applications, such as LexMateria.ai (a Gen-AI system tuned for the legal industry), the tuning is more specific – focused on the needs of that industry or knowledge area.
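As an illustration only (real fine-tuning adjusts neural-network weights, and the function names and corpora here are our own hypothetical examples), the sketch below overlays heavily weighted domain-specific counts on a general-purpose toy word-prediction model, biasing its predictions toward legal usage:

```python
from collections import defaultdict, Counter

def count_bigrams(corpus, weight=1):
    """Count word-to-next-word transitions, optionally with extra weight."""
    model = defaultdict(Counter)
    for doc in corpus:
        words = doc.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += weight
    return model

def tune(base_model, domain_corpus, weight=5):
    """Overlay heavily weighted domain counts on a general base model --
    a rough analogy for tuning a foundation model on industry data."""
    for prev, counts in count_bigrams(domain_corpus, weight).items():
        base_model[prev].update(counts)
    return base_model

# Hypothetical general-purpose text versus legal-domain text.
general = count_bigrams(["the contract was signed", "the weather was nice"])
tuned = tune(general, ["the contract was breached"])
print(tuned["was"].most_common(1)[0][0])  # "breached" now outweighs general usage
```

The design point the analogy captures: tuning does not start over from scratch – it starts from the general foundation and shifts its behavior toward the target domain.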
Generation
Generate content, evaluate, re-tune.
The final phase of Gen-AI involves generating content, evaluating that generated content and constantly re-tuning – assessing the Gen-AI application’s output to continually improve its quality and accuracy. This is why so many of the popular commercial Gen-AI platforms have, in the course of less than a year, released major new versions of their underlying technology. The cycle of generating content and improving upon it is critical to making these systems smarter over time – thus reducing their generation of made-up facts and information (i.e., fewer hallucinations).
As public Gen-AI systems become smarter and more accurate, the applications built to leverage their core strengths will in turn also improve. At LexMateria.ai we are always working to improve our AI models and to deliver better, more accurate results based on our customers’ private data.
Gen-AI for Lawyers
For lawyers, Gen-AI can be very powerful, though fraught with risk.
Lawyers can use Gen-AI technology to draft initial versions of legal documents or to research case-related issues. Attorneys can reduce the work required to find patterns in witness testimony and to ensure that all the pertinent facts of a case are identified.
But the risks are high – especially for lawyers, who are involved in critical decision-making every day. A lawyer cannot simply use a public Gen-AI tool as part of their practice. By using the current popular commercial Gen-AI systems (as listed above) to perform legal work, lawyers risk generating false facts and case law – that is, content completely made up by the machine.
These instances of made-up information are referred to in the AI technical field as hallucinations, and they can be very bad for attorneys. Relying on content generated by public AI systems can lead to sanctions, reputational loss and other professional and business problems. See our article about the risk of hallucinations for lawyers when using ChatGPT and similar Gen-AI systems as part of a law practice.
See our article for attorneys about the Pitfalls of Using ChatGPT for your cases.