Colorado Federal Court Provides Guidance on AI and eDiscovery


Morgan v. V2X, Inc., a March 30 decision from the United States District Court for the District of Colorado, provides guidance on the intersection of AI and eDiscovery and examines how the use of AI tools interacts with longstanding legal protections. Specifically, the order sheds light on how protective orders may be drafted to address the use of AI in litigation.

Morgan v. V2X and the Rise of AI in Discovery

In Morgan, pro se plaintiff Archie Morgan alleged he was subjected to a hostile work environment and subsequently terminated based on his race and national origin and in retaliation for whistleblowing activity. Defendant V2X, Inc. contends the company discharged Morgan for legitimate, non-discriminatory, non-retaliatory reasons following a workplace complaint and investigation.

This case is notable for the underlying discovery dispute that arose when V2X moved to amend a protective order after both parties started using AI tools. The court addressed two central questions: (1) whether the pro se plaintiff must disclose the specific AI tool he was using given that discovery involved the exchange of confidential information, and (2) how the existing protective order should be amended to govern the parties’ use of AI.

The Discovery Dispute: AI Meets Protective Order

The operative protective order broadly prohibited disclosure of “confidential information,” defined to include medical and personal financial information, private personnel information, trade secrets, and other proprietary business information. When V2X learned Morgan was using AI in connection with confidential information, it wanted to assess whether its confidential information had been compromised. Therefore, it filed a motion seeking to (1) amend the protective order to expressly address AI use and (2) compel Morgan to disclose the specific AI tool he was using.

Morgan argued the name of the AI tool was shielded from disclosure under the work product doctrine, contending that a party’s selection of litigation support tools reveals mental impressions and case strategy, and disclosure would create an unfair “technological gap” between a self-represented party and a well-resourced corporate defendant with access to its own proprietary AI systems. V2X argued it had a legitimate interest in knowing which AI platform Morgan used. It further argued an amendment to the existing protective order was necessary to prevent confidential discovery materials from being fed into mainstream AI platforms that collect and store user data.

The Court’s Analysis: Two Key Questions

Magistrate Judge Maritza Dominguez Braswell framed the analysis around two questions: “(1) to what extent will work product protections apply to a pro se litigant’s use of AI, and (2) to what extent should a protective order expressly restrict the use of AI?” This blog post, the first of two on this case, addresses the first question: whether, in the Morgan court’s view, work product protection covers AI use.

Work Product Protection for AI-Assisted Litigation

The court found that the work product protections provided by Federal Rule of Civil Procedure 26(b)(3) apply to pro se litigants and extend to use of AI tools. It reasoned that the rule’s plain language, which does not condition protection on the involvement of counsel, broadly protects materials “prepared in anticipation of litigation or for trial by or for another party.” Judge Dominguez Braswell noted:

The importance of applying these protections to pro se litigants is magnified in the context of AI — one of the most powerful knowledge tools ever to become available to the masses. This is because pro se litigants are forced to act as both party and advocate, simultaneously. And for the first time in history, widespread access to powerful technology may make that dual role surmountable. A reading of Rule 26(b)(3) that conditions work product protection over AI materials on the involvement of counsel finds no support in the rule’s text and would further disadvantage unrepresented litigants.

(internal quotation marks and citations omitted)

The court rejected the argument that routing information through a third-party AI platform automatically waives work product protection, analogizing to email and digital privacy principles discussed in Carpenter v. United States and United States v. Warshak:

[E]ven though AI use technically ‘discloses’ information to a third party, it is highly unlikely the information will fall into the hands of an adversary absent some legal process to compel it. Thus, AI interactions do not automatically compromise work product protections.

Judge Dominguez Braswell explained:

[G]iven how AI tools function, it is entirely reasonable for a person to expect some privacy and confidentiality when interacting with these tools, even though they understand a third party is behind the tool collecting and storing their information.

Distinguishing Heppner From Morgan

The court also addressed United States v. Heppner, a decision we wrote about in a previous blog post. Heppner has received much attention in the developing body of AI-related eDiscovery case law, and Judge Dominguez Braswell acknowledged that, at first glance, Heppner might appear to conflict with the Morgan court’s ruling.

However, the court distinguished Heppner on two grounds. First, Judge Dominguez Braswell noted Heppner was a criminal matter, whereas Morgan is a civil case governed by the Federal Rules of Civil Procedure. Second, she observed that in Heppner, there was a gap between the party and counsel because the defendant had acted entirely apart from his attorney — a gap that, as she noted, “does not exist in the pro se context,” where the litigant simultaneously serves as both party and advocate. In reaching this conclusion, the court drew support from Warner v. Gilbarco, Inc., where the court rejected an argument that the plaintiff had waived work product protection by using a public AI tool. As the Warner court reasoned, a waiver of work product protection requires disclosure to an adversary, or disclosure made in a manner likely to result in an adversary obtaining the materials — neither of which is satisfied by inputting information into an AI tool. Taken together, Morgan and Warner signal an emerging distinction in the case law between the work product protections available to pro se litigants and those available to litigants represented by counsel.

Limits of Work Product Protection for AI Tools

According to Judge Dominguez Braswell, work product protection for a pro se party does not extend to all circumstances. When Morgan sought to shield the name of the AI tool he used, the court held he had not carried his burden to demonstrate that disclosing the tool would reveal mental impressions or case strategy. Accordingly, the court ordered Morgan to disclose the name of any AI platform he used to process confidential information.

Takeaways

The Morgan v. V2X decision marks a milestone in the developing body of AI-related eDiscovery case law. As courts grapple with how longstanding legal protections apply to AI-assisted litigation, the question of work product protection is but one piece of the puzzle. Equally important is the question of how counsel should consider drafting protective orders to govern the use of AI in litigation. In our next post, we will examine the Morgan court’s analysis of the competing protective order provisions proposed by the parties, the language the court ultimately adopted, and what that language might mean for litigants navigating AI use in discovery.


