Is the Use of Generative AI in Litigation More Artificial Than Intelligence?


Legal scholars and computer programmers have been predicting the effect of generative artificial intelligence (A.I.) on the legal system for years. Yet only recently has the issue come to dominate just about every legal publication, periodical, blog, and commentary dealing with the practice of law. Part of the reason is that technology once conceived of only as science fiction has now become a reality. A.I. has tremendous potential to bring broad efficiency and accessibility to many areas of the law and law practice. But another reason for the sudden interest in generative A.I. and its effect on the practice of law is its potential for misuse. 

How Not to Use A.I. in Litigation

It was bound to happen eventually. With generative A.I. tools freely available online, it was only a matter of time before a lawyer got caught taking shortcuts in legal research and allowing A.I. to feed false legal authority to the court. 

In a 2023 federal action out of New York, when the defendant moved to dismiss the case, plaintiff’s attorneys filed a responsive pleading that referenced “bogus” judicial opinions, fake quotes, and contrived legal citations, all derived from the A.I. tool ChatGPT. Naturally, the court quickly determined that the referenced cases were nonexistent. To make matters worse, when the court questioned the authenticity of the cited opinions and ordered the attorneys to produce them, they “doubled down” on their blunder by annexing to an affidavit excerpts from the fake opinions to support the purported research. The court noted that the substance of the opinions produced by the A.I. tool consisted of legal gibberish that was difficult to follow and “border[ed] on nonsensical.” At a sanctions hearing, the attorneys finally fell on their swords and apologized profusely for the blunder. They told the court that they had relied on the A.I. tool only as a last resort to supplement their other research. The problem was, they did no other research. As a result, the court issued sanctions against the lawyers and the law firm for its partners’ unprecedented actions and imposed a $5,000.00 penalty.

Is the Use of A.I. in Litigation Unethical?

So, what is the lesson to be learned here? Is it unethical to use generative A.I. to do legal research or to compile and assess data necessary for litigation?

No. Of course not. After all, well-known and accepted platforms such as Westlaw and Lexis use A.I. to perform legal searches. And it is expected that, as technology advances, further tools will become available to lawyers for the efficient practice of law. However, existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings in court. 

Federal Rule of Civil Procedure 11 (Signing Pleadings, Motions, and Other Papers; Representations to the Court; Sanctions)

Fed. R. Civ. P. 11(b)(2) provides that when making representations to the court,

[b]y presenting to the court a pleading, written motion, or other paper--whether by signing, filing, submitting, or later advocating it--an attorney or unrepresented party certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances: . . . 

(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law . . . .

Rule 11 provides that a violation of the rule may result in sanctions (Fed. R. Civ. P. 11(c)(1)), but an attorney has 21 days to withdraw or correct the challenged contention before any motion for sanctions is presented to the court (Fed. R. Civ. P. 11(c)(2)). The court itself may “order an attorney, law firm, or party to show cause why conduct specifically described in the order has not violated Rule 11(b).” (Fed. R. Civ. P. 11(c)(3)).

Florida Rule of Civil Procedure 1.150 (Sham Pleadings)

Florida Rule of Civil Procedure 1.150 may be read to address making frivolous legal arguments.

Rule 1.150 allows a court to strike any pleading or part thereof that it determines to be a sham. This gatekeeping rule governs the procedural aspects of the issue. More applicable to the bad faith conduct of the attorneys may be the Florida Rules of Professional Conduct.

Florida Rules of Professional Conduct

Rule 4-3.1 – Meritorious Claims and Contentions.

Florida Rule of Professional Conduct 4-3.1 addresses frivolous claims made in bad faith. It provides that “a lawyer shall not bring or defend a proceeding, or assert or controvert an issue therein, unless there is a basis in law and fact for doing so that is not frivolous, which includes a good faith argument for an extension, modification, or reversal of existing law.” The comment to Rule 4-3.1 further provides that

[t]he filing of an action or defense or similar action taken for a client is not frivolous merely because the facts have not first been fully substantiated . . . . What is required of lawyers, however, is that they inform themselves about the facts of their clients' cases and the applicable law and determine that they can make good faith arguments in support of their clients' positions.  

Rule 4-3.3 - Candor Toward the Tribunal.

Florida Rule of Professional Conduct 4-3.3 provides that “a lawyer shall not knowingly: (1) make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer . . . .” This gatekeeping rule broadly addresses the lawyer’s duty to present honest and accurate information to the court.

The Comments to Rule 4-3.3 state that “th[e] rule sets forth the special duties of lawyers as officers of the court to avoid conduct that undermines the integrity of the adjudicative process. . . . [T]he lawyer must not allow the tribunal to be misled by false statements of law or fact or evidence that the lawyer knows to be false.” To violate the rule, the lawyer must knowingly provide false information. 

If a lawyer uses A.I. to generate a motion or response and files it with the court without checking the information in it, the lawyer technically has not “knowingly” provided false information. But in light of the lawyer’s obligation under Rule 4-3.1 to exercise good faith in conducting and submitting legal research, it is arguable that a lawyer who fails to confirm, or in any way assess, the legal arguments and cases he or she submits to the court is knowingly making a false statement: the representation that those arguments have a basis in law supporting the client’s position. 

The Comments to Rule 4-3.3 also address misleading legal arguments. “Legal argument based on a knowingly false representation of law constitutes dishonesty toward the tribunal. A lawyer . . . must recognize the existence of pertinent legal authorities. . . .” Although this Comment generally concerns the duty to disclose adverse authority, “[t]he underlying concept is that legal argument is a discussion seeking to determine the legal premises properly applicable to the case.” A lawyer who submits legal arguments to the court as supportive of the client’s position without having even read them is not proffering honest information to the tribunal. 

Evidence of the need for candor with regard to a lawyer’s use of A.I. in preparing for litigation can be seen in the rules invoked by other courts to avoid being misled by falsely generated legal research. For example, Judge Brantley Starr of the Northern District of Texas issued a specific order to all lawyers coming before the court to certify their use of generative A.I. in court filings. Lawyers had to certify “either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey AI, or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being.”  

Should You Welcome A.I. into Your Law Practice?

According to a recent Thomson Reuters Institute survey of more than 450 lawyers at large and midsize law firms on ChatGPT and generative A.I. within law firms:

  • Only 3% of responding lawyers said they use generative A.I. in their law practice; 34% said their law firm was considering using it for legal operations.

  • Some 82% of those surveyed believe that ChatGPT and generative A.I. can be applied to legal work; only 51% said they should be applied to legal work; 25% were undecided. 

  • A significant share of responding lawyers—62% (80% of whom were law firm partners)—had concerns about the use of generative A.I. in practice; 36% of respondents said they did not know whether their law firm had risk concerns about its use.

  • About 15% of respondents said their law firms have warned employees against unauthorized generative A.I. use; 19% did not know whether their firm had issued any warnings.

  • Only 6% of respondents said their law firms have banned unauthorized A.I. use altogether; 22% did not know whether their firm had banned its use.

Despite the varying perspectives among lawyers, it is clear that many simply are unsure of what generative A.I. can do and are hesitant to adopt it in practice.

A.I. can be a wonderful tool for generic law firm tasks such as drafting letters or other documents that do not require legal research. But for the research and writing of legal memoranda, A.I. is not yet a sufficient tool.

At Eximius, we use humans to perform all research and writing. Eximius Writing Services provides litigation support services, including all aspects of legal research and legal writing. We provide quality support services based on accurate and authentic research. If you need assistance drafting pleadings or conducting accurate legal research in your cases, call Eximius Writing Services today at 407-926-0167.

Sahily Picon