On August 4, 2025, the USPTO issued a memo to patent examiners with the subject “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101.” [1]
Much has been made of these reminders and what they might signal about a possible shift in how the Office treats rejections under § 101, particularly for applications involving software and artificial intelligence (AI). [2][3][4] Notably, however, these reminders were issued not to the entire examining corps but only to three of the eight technology centers (TCs) that examine utility patents. While applicants and practitioners may hope that these reminders mark the beginning of a sea change in the world of § 101 rejections, especially as related to AI, they should also be concerned about the reminders’ exclusivity and what it may mean for consistency in examination standards across the USPTO.

Examiners at the USPTO are divided into TCs, which are further subdivided into art units (AUs). Broadly speaking, each TC is directed to a particular type of technology, and each AU to a subcategory of that technology. For example, TC 2100 examines patent applications involving “Computer Architecture, Software and Information Security,” and within it, AUs 2121-2129 and 2141-2148 examine applications specifically involving “Artificial Intelligence.” As another example, TC 3700 examines applications involving “Mechanical Engineering, Manufacturing and Products,” and AUs 3771-3775 examine applications involving “Medical & Surgical Instruments, Treatment Device, Surgery and Surgical Supplies.” [5]

The three TCs to which the August 2025 reminders were directed – TC 2100, TC 2600 (Communications), and TC 3600 (Transportation, Electronic Commerce, Construction, Agriculture, Licensing and Review) – examine most of the utility patent applications involving AI. Consider, for example, the 163,565 utility patent applications that were filed between January 1, 2015, and December 31, 2023, have been assigned to a TC, and include the phrase “artificial intelligence” or “machine learning” anywhere in the text of the application:

This subset was chosen because newer applications may not yet have been assigned to a TC, and older applications that recite “artificial intelligence” or “machine learning” are likely not directed to AI as it is understood today. Data is provided by Juristat.

As shown, 117,695 (71%) of the applications that recite AI or machine learning were examined in one of TC 2100, 2600, or 3600. But this still leaves 29% of applications unaccounted for. In particular, TC 2400 (Computer Networks, Multiplex, Cable and Cryptography/Security) received 22,586 (14%) of the AI-mentioning applications filed during this period. And yet the August 2025 reminders were not directed to examiners in TC 2400.
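For readers who want to reproduce these percentages from the raw counts, a minimal arithmetic sketch follows. It uses only the Juristat counts quoted above; small differences from the article’s whole-number figures are a matter of rounding.

```python
# Back-of-the-envelope check of the TC shares cited above,
# using only the counts quoted in this article.
total_apps = 163_565      # AI/ML-mentioning applications, 2015-2023
memo_tc_apps = 117_695    # combined count for TCs 2100, 2600, and 3600
tc_2400_apps = 22_586     # TC 2400, which did not receive the reminders

share_memo_tcs = memo_tc_apps / total_apps * 100   # reported as 71%
share_tc_2400 = tc_2400_apps / total_apps * 100    # reported as 14%

print(f"Reminder-recipient TCs: {share_memo_tcs:.1f}%")
print(f"TC 2400:                {share_tc_2400:.1f}%")
```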

This data can be broken down further. AI has become integrated into a wide variety of disciplines over the last decade; the scope of patent applications reciting AI looked very different in 2023 than it did in 2015. If we split the data into applications filed from 2015-2019 and applications filed from 2020-2023, we see a corresponding shift in the percentage of applications examined by each TC:

These charts suggest that, as AI becomes integrated into more fields, the percentage of AI applications examined in TC 2100 (where, as noted above, there are AUs specifically dedicated to AI) will decrease, and the percentage examined in other TCs will correspondingly increase. Between 2015 and 2019, 30% of all applications mentioning AI or machine learning were examined in TC 2100; between 2020 and 2023, that figure was only 24%. Another data point worth noting: the total number of applications mentioning AI has increased dramatically, with about twice as many filed from 2020-2023 as from 2015-2019. If the trend continues, we can expect both more applications directed to AI and AI applications being examined in a broader range of TCs and AUs.

This trend raises the question: is the USPTO equipped to handle such a rapidly evolving field? Two important and interrelated issues at the USPTO bear on this question: 1) application routing and 2) consistency in examination guidelines.

Application routing at the USPTO

Any evolution in patent application subject matter is likely to encounter some friction at the USPTO. The organization of TCs at the USPTO and the way application routing is handled are particularly relevant for newer fields like AI, which did not exist in their current form when the relevant procedures were put in place.

Application routing at the USPTO is determined by patent classification. Each application is classified according to a particular scheme, and then the application is routed to a relevant AU based on that classification. There are two classification schemes used for utility patents before the USPTO: the US Patent Classification (USPC) scheme and the Cooperative Patent Classification (CPC) scheme. [6] The USPC scheme has largely been replaced by the CPC scheme, and patents no longer have their USPC class printed on them, only their CPC symbol(s).

According to the USPTO website, documentation about the USPC scheme exists today only for historical purposes. [6] It is clear from the update record (in the form of “classification orders”) that the USPC scheme has not been updated since 2013. [7] Consider what the landscape of AI looked like in 2013 compared to today. Along with other newer fields, like quantum computing, AI is not adequately captured by the USPC scheme, and because the scheme is no longer updated, this will never be fixed.

One may reasonably wonder why this is a problem. The USPTO website says the USPC scheme is purely historic – why would we expect a “historic” classification scheme to adequately capture AI? Who cares?

This is a problem because the USPC scheme is still in active use at the USPTO for one key purpose: application routing. Utility patent applications are still, as of 2025, routed to the AU where they will be examined based on USPC class, not CPC symbol; a USPC class is assigned to each application specifically for this purpose. For a brief trial period in 2022, the USPTO attempted to route applications by CPC symbol. This resulted in so many misrouted applications that, rather than fix the problem, the Office simply reverted to the old procedure of routing by USPC class. As of 2025, there is no indication that the USPTO plans to move away from USPC-based routing anytime soon, despite the incorrect and out-of-date information still listed in MPEP § 909.

To continue to classify and route applications based on a classification scheme that has not been updated in more than a decade is, to say the least, not best practice.

Any classification scheme needs to be updated regularly to remain relevant; a classification scheme for patents in particular, where the whole point is that something new is disclosed in every application, cannot effectively serve its purpose if it has not been updated in twelve years. The way application routing is handled at the USPTO makes it difficult to determine whether the trend observed in the charts above reflects an intentional shift in how applicants and the Office view and handle AI or is simply a quirk of the routing system. Has the Office intended to spread AI-related applications across more TCs? Does the evolution of the field mean that applications which integrate AI into specific technologies should be considered differently than applications directed to AI in a purely computer science context? Or has any distinction arisen accidentally as it has become harder and harder to classify AI using a scheme that is more than a decade out of date?

As a specific example, should an application directed to training machine learning models for use in medical or surgical procedures be routed to an AU that handles training machine learning models? Or to an AU that handles medical and surgical procedures? Or either? Or both? The answer should not hinge on how a classification scheme last updated in 2013 classifies AI. If the USPC scheme does not adequately capture the technology an application is directed to, there is no real way to assess whether the application has been routed “correctly.” This brings us to another relevant issue at the USPTO: consistency.

Consistency in examination guidelines at the USPTO

Contrary to the trends suggested by the charts above, the USPTO does not seem to account for how AI-related applications will permeate the entire Office. The Office further seems short-sighted about the applicability of 35 U.S.C. § 101 across its entire examining corps; otherwise, why were the August 2025 reminders issued only to TCs 2100, 2600, and 3600? These three TCs examine most of the AI-related applications the Office receives, but not all of them. TC 2400, for one, examines a large number, and, for another example, applications which mention “artificial intelligence” or “machine learning” and also include the words “medical,” “surgery,” or “surgical” anywhere in the text are very likely to be examined in TC 3700:

One would expect a similar trend for TCs and AUs in other disciplines which have been quick to adopt AI methods.

Why is the USPTO issuing guidance that should be relevant to the entire examining corps to only a subset of examiners? Examiners at the USPTO issue Office actions, not examiner actions, because there is meant to be consistency across the entire Office in how rules and regulations are applied. Providing guidance and reminders to only a subset of the examining corps does not help the Office achieve consistency, and it creates headaches for practitioners and applicants looking for clarity and consistency across Office actions. As things stand, a § 101 rejection from an examiner in TC 3700, who has not received much guidance on this topic, looks different from a § 101 rejection from an examiner in TC 2600. This may make it harder to convince the examiner to withdraw a rejection: in addition to being unfamiliar with best practices for writing the rejection, the examiner may also be unfamiliar with distinguishing persuasive from non-persuasive arguments for overcoming it. Conversely, it may lead to further prosecution and litigation headaches down the road if the examiner, through lack of experience and guidance, fails to make a § 101 rejection that should have been made. Meanwhile, an examiner in TC 2100 or TC 2600 may not be familiar with the medical terminology needed to effectively examine an application about machine learning in medical devices, but as a consequence of the Office’s outdated routing procedures, they are stuck doing the best they can.

Conclusions

Many applicants and practitioners are hoping for prompt changes in how the USPTO addresses § 101 rejections, especially as they relate to AI. Recent developments, such as the August 2025 reminders, have given those in the field renewed optimism that change is on the horizon. But the changes that practitioners and applicants see do not necessarily match what is going on behind the scenes at the USPTO. The way application routing is currently handled at the Office and the lack of consistency in guidelines issued to examiners suggest that the USPTO is not currently equipped to adapt to a field evolving as quickly as AI.

In addition to hoping for changes directed specifically to § 101, practitioners and applicants should also look for the USPTO to move away from USPC-based routing and to improve consistency in examination procedures if any real, intentional change is to be effected. The USPTO website and MPEP are misleading, and in some cases flat-out incorrect, about how the USPC scheme is used at the USPTO and how application routing is handled. The USPTO should provide more transparency in this regard. But at the end of the day, regardless of whether the Office is transparent with applicants about how applications are routed, it is troubling that the Office continues to rely on a classification scheme that has not been updated in more than a decade. If routing by CPC symbol was so inaccurate, the reason should have been investigated and corrected. The USPC scheme is no longer suitable for use, and this will only become more of an issue as time goes on and technology evolves further.

In the meantime, the USPTO must provide consistent guidance across the whole examining corps regarding § 101 and other statutes. Although the August 2025 guidance was provided to only three TCs, it is clear that many other TCs examine AI-related applications and deal with the associated § 101 analysis.

Perhaps a sea change in § 101 is coming, but a change in the Office’s general approach to § 101 is not enough. Real change is going to require specific modernization in practices related to application routing and examination guidelines at the USPTO. For that, applicants and practitioners may have to wait a bit longer.

References

[1] C. Kim, “Reminders on evaluating subject matter eligibility of claims under 35 U.S.C. 101,” 4 August 2025. [Online]. Available: https://www.uspto.gov/sites/default/files/documents/memo-101-20250804.pdf. [Accessed 23 October 2025].
[2] M. Lew, “How USPTO Examiner Memo Informs Software Patent Drafting,” 8 September 2025. [Online]. Available: https://www.law360.com/articles/2383638/how-uspto-examiner-memo-informs-software-patent-drafting.
[3] R. N. Phelan, “USPTO Issues “Reminders” Supporting AI and Software Patenting and Instructing Patent Examiners on the Limits of Section 101 Patent Eligibility,” 29 September 2025. [Online]. Available: https://www.patentnext.com/2025/09/uspto-issues-reminders-supporting-ai-and-software-patenting-and-instructing-patent-examiners-on-the-limits-of-section-101-patent-eligibility/.
[4] D. Kass, “Squires Jumps Right Into Patent Eligibility Reform,” 1 October 2025. [Online]. Available: https://www.law360.com/articles/2395014/squires-jumps-right-into-patent-eligibility-reform.
[5] USPTO, “Patent Technology Centers Management,” 23 March 2023. [Online]. Available: https://www.uspto.gov/patents/contact-patents/patent-technology-centers-management. [Accessed 23 October 2025].
[6] USPTO, “Patent Classification,” 1 August 2025. [Online]. Available: https://www.uspto.gov/patents/search/classification-standards-and-development. [Accessed 23 October 2025].
[7] USPTO, “Classification Orders,” 27 March 2019. [Online]. Available: https://www.uspto.gov/patents/search/understanding-patent-classifications/classification-orders. [Accessed 23 October 2025].

When we first covered Ex parte Desjardins, we noted that the decision, which was issued just days after Director John Squires took office, could mark the beginning of a new era for AI and software patent eligibility at the USPTO. That prediction now appears well-founded. On November 4, 2025, Director Squires designated the Desjardins decision as precedential, ensuring that its reasoning now binds all patent examiners and the Patent Trial and Appeal Board (PTAB).


On October 8, 2025, the Office announced the “Automated Search Pilot Program,” a new initiative that will use an internal AI tool to conduct a prior art search before an application undergoes substantive examination. The pilot program, slated to begin on October 20, 2025, offers applicants a unique, early look at an AI-generated report on potential prior art. This is the latest development in the USPTO’s ongoing efforts to integrate AI into its processes, following previous enhancements to the Patents End-to-End (PE2E) search suite, such as the AI-powered “Design Vision” tool for image searching in design patents.

How the Pilot Program Works

For a limited number of applications, the USPTO will use its proprietary AI tool to perform a preliminary search of U.S. patents, pre-grant publications, and foreign patent documents, leveraging the application’s classification, specification, claims, and abstract. The result is an Automated Search Results Notice (ASRN), which lists up to 10 of the most relevant prior art documents as ranked by the AI tool.

Key details for participation:
  • Eligibility: The pilot is open to original, non-continuing, non-provisional utility patent applications filed under 35 U.S.C. 111(a).
  • Timeline: Petitions to participate will be accepted from October 20, 2025, until April 20, 2026, or until ~1,600 applications (200 per Technology Center) are accepted.
  • Requirements: Applicants must file a petition (Form PTO/SB/470) and pay the corresponding fee ($90 for micro, $180 for small, and $540 for large entities) on the same day the application is filed, file in DOCX format through Patent Center, and be enrolled in the e-Office Action Program.

Takeaways for Practitioners and Applicants

Similar to conducting a prior art search, the ASRN provides an early glimpse of the potential hurdles an application may face, creating several possible strategic advantages. Practitioners and applicants can make informed decisions before significant prosecution costs are incurred. The ASRN is not an Office Action, and there is no requirement to respond; however, it can inform next steps:

  • File a Preliminary Amendment: Proactively amend the claims to better distinguish over the art cited in the ASRN, placing the application in better condition for allowance.
  • Request Deferral of Examination: If the cited art requires more detailed analysis, applicants can request deferral of examination to allow more time.
  • Consider Abandonment: If the ASRN uncovers highly relevant prior art, the applicant can abandon the application and petition for a refund of certain fees, saving future costs.

A Word of Caution and Other Considerations

While the potential benefits of using AI appear worthwhile, practitioners and applicants should approach this pilot with an objective lens, grounded in the current realities of the technology.

  • A False Sense of Security: Commentaries from the patent examiner community suggest that current AI search tools are far from perfect. An ASRN listing no highly relevant art does not guarantee that none exists. A human examiner’s comprehensive search may still uncover a dispositive reference.
  • Quality of AI Results: The program’s effectiveness hinges on the AI tool’s quality. Past AI initiatives have received mixed reviews from examiners. The ASRN should be viewed as a supplemental data point, not a complete search.
  • An Incomplete Picture: The ASRN is limited to 10 documents and is not an exhaustive search. It may highlight key references but will inevitably miss others.
  • Navigating the Duty of Disclosure: The ASRN introduces a new consideration regarding the duty of candor. Upon receipt, the applicant and their representative are officially aware of the cited references, creating a professional obligation to review them for materiality. If a reference is material, the most prudent course remains to submit it on a formal Information Disclosure Statement (IDS).

AI’s Evolving Role at the USPTO

The USPTO is exploring ways to assist examiners and streamline the patent prosecution process for everyone; this pilot is part of its broader strategy to leverage AI to improve examination.

For now, this is a limited pilot program.  However, its results will undoubtedly inform the USPTO’s next steps in integrating AI more deeply into its operations.  The program offers a potentially valuable, low-risk opportunity to gain an early strategic advantage in the patenting process.  Practitioners with new utility applications to file in the coming months should consider whether their clients could benefit from being among the first to participate.

PatentNext Summary: The Desjardins decision, co-authored by new USPTO Director John Squires, signals a potential shift toward greater patent eligibility for AI and software innovations. By vacating a § 101 rejection and warning that “categorically excluding AI innovations from patent protection in the United States jeopardizes America’s leadership in this critical emerging technology,” the Appeals Review Panel (ARP) emphasized that eligibility should not be used as a catch-all to reject claims better addressed under §§ 102, 103, and 112. For practitioners, the decision highlights the importance of describing concrete technical improvements in the specification, tying those improvements directly to the claim language, and framing claims as technological solutions rather than abstract ideas. This marks a potentially significant recalibration of the USPTO’s approach to AI-related claims under Director Squires’ leadership.


PatentNext Summary: The USPTO issued “Reminders” for examiners in Tech Centers 2100/2600/3600 addressing §101 eligibility for software and Artificial Intelligence (AI)/Machine Learning (ML)-related inventions; while not changing the MPEP, the guidance is meant to sharpen examination practice. It clarifies Step 2A, Prong One by limiting “mental process” to what can be practically performed in the human mind—stating that AI claim limitations not performable mentally are not “mental processes”—and by distinguishing claims that merely involve a judicial exception (e.g., Example 39) from those that recite one (e.g., Example 47). For Step 2A, Prong Two, examiners must evaluate the claim as a whole to identify a practical application, giving weight to meaningful additional limitations and to improvements in computer capabilities or a technical field, even if the improvement is only implicit in the specification. The Reminders caution against oversimplified “apply it” rejections, require a preponderance of evidence for “close call” §101 rejections, and reinforce compact prosecution that fully addresses §§102/103/112 for every claim in the first action.


PatentNext Summary: In a precedential decision, the U.S. Court of Appeals for the Federal Circuit reversed a district court’s §101 dismissal of patent claims relating to an automated system for dumbbell weight selection and adjustment, finding that the claims were not abstract under Alice step one and therefore are patent-eligible. The Federal Circuit held that, contrary to the district court’s conclusion, the claims included meaningful limitations which provide enough specificity and structure to satisfy § 101 even though the limitations were allegedly found in the prior art. The Federal Circuit re-emphasized the importance of considering patent claims in their entirety as a whole, which the district court improperly failed to do.

****

The U.S. Court of Appeals for the Federal Circuit reversed a decision of the U.S. District Court for the District of Utah, which had invalidated a set of patent claims directed to an automated system for dumbbell weight selection and adjustment, and remanded the case for further proceedings. PowerBlock Holdings, Inc. v. iFit, Inc., No. 24-1177 (Fed. Cir. 2025).

In the Utah district court, PowerBlock accused iFit of infringing U.S. Patent No. 7,578,771 (the “’771 Patent”) titled “Weight Selection and Adjustment System for Selectorized Dumbbells including Motorized Selector Positioning,” which “relates generally to exercise equipment” and more particularly “to selectorized dumbbells and to an overall, integrated system for selecting and adjusting the weight of a selectorized dumbbell or a pair of selectorized dumbbells.” ’771 Patent at 1:15–19.

The district court held that the claims fail the two-step Alice test and are patent ineligible because, at Alice step one, claims 1-18 and 20 of the ’771 Patent were directed to the abstract idea of automated weight stacking and “implemented using generic components requiring performance of the same basic process,” and because, at Alice step two, claims 1-18 and 20 did “not add significantly more than the abstract idea of the end-result of an automated selectorized dumbbell.” Id. at *9 (internal citations omitted).

PowerBlock appealed this decision to the Federal Circuit.

The Federal Circuit reviewed claim 1 as a representative claim:

   1.  A weight selection and adjustment system for a selectorized dumbbell, which comprises:

   (a) a selectorized dumbbell, which comprises:

(i) a stack of nested left weight plates and a stack of nested right weight plates;

(ii) a handle having a left end and a right end; and

(iii) a movable selector having a plurality of different adjustment positions in which the selector may be disposed, wherein the selector is configured to couple selected numbers of left weight plates to the left end of the handle and selected numbers of right weight plates to the right end of the handle with the selected numbers of coupled weight plates differing depending upon the adjustment position in which the selector is disposed, thereby allowing a user to select for use a desired exercise weight to be provided by the selectorized dumbbell; and

   (b) an electric motor that is operatively connected to the selector at least whenever a weight adjustment operation takes place, wherein the electric motor when energized from a source of electric power physically moves the selector into the adjustment position corresponding to the desired exercise weight that was selected for use by the user.

Based on its Alice step one analysis, the Federal Circuit determined that the district court incorrectly concluded that claim 1 is “directed towards the general end of automated weight stacking” because claim 1 “seek[s] to claim systems comprising weight selection and adjustment systems consisting of the two or three ‘generic’ components, rather than any particular system or method of selectorized weight stacking” (Id. at *6), thereby “giv[ing] rise to a preemption problem.” Id. at *7.

Specifically, the Federal Circuit found that the district court had erroneously ignored limitations required by claim 1 when it did not consider the limitations reciting “an electric motor, coupled to a selector movable into different adjustment positions, and energizing the motor to physically move the selector via the coupling between the motor and the selector.” Id. at *8-9. According to the Federal Circuit, the district court was wrong to ignore such limitations “merely because [the ignored limitations] can be found in the prior art.” Id. at *11.

As such, the district court did not properly consider, under Alice step one, “the claims in their entirety to ascertain whether their character as a whole is directed to excluded subject matter.” Id. at *10 (internal citations omitted).

Further, the Federal Circuit warned “parties and tribunals not to conflate the separate novelty and obviousness inquiries under 35 U.S.C. § 102 and 103, respectively, with the step one inquiry under § 101.” Id. at n.3.

****

This precedential Federal Circuit decision is promising for those pursuing patents directed to mechanical automation systems. Such practitioners should attempt to draft claims that provide physical structure and interaction while avoiding functional language that could be construed as abstract. Even if some of the physical components are known, their claimed combination and interaction may still yield patent-eligible subject matter. We note, though, that pure software-based automation may face tougher scrutiny.

Additionally, when reviewing office actions or during litigation, practitioners can utilize this decision to push back on §101 rejections that ignore the claim as a whole or conflate subject matter eligibility with novelty/obviousness.

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Lilian Y. Ficht, at lficht@marshallip.com or 312-423-3445. Connect with or follow Lilian on LinkedIn.

PatentNext Summary: Recent rulings from the Northern District of California in Bartz v. Anthropic and Kadrey v. Meta provide the first substantive guidance on how the fair use doctrine applies to AI training, particularly for large language models (LLMs). Both courts found that using lawfully obtained copyrighted books for LLM training can qualify as “highly transformative” and support a fair use defense, while the use of pirated works may result in liability—especially if market harm is demonstrated. These cases highlight the growing legal emphasis on the source of training data and its market impact, offering a framework for AI developers to mitigate risk. The decisions underscore the need for lawful data acquisition, internal guardrails to prevent regurgitation of copyrighted content, and contractual protections for authors and data owners amid an evolving copyright landscape.

****

Recent rulings by two judges in the U.S. District Court for the Northern District of California offer the first merits-based guidance on how “fair use” applies to artificial intelligence (AI) training, and in particular, large language model (LLM) training. These decisions are Bartz v. Anthropic, 2025 WL 1741691 (N.D. Cal. June 23, 2025) (referred to herein as “Anthropic”) and Kadrey v. Meta Platforms, 2025 WL 1752484 (N.D. Cal. June 25, 2025) (referred to herein as “Meta”).

The courts found that using lawfully obtained copyrighted texts for training LLMs can be considered “highly transformative” and can fall under the copyright defense of “fair use,” but that using pirated materials could lead to liability, particularly if the use affects the market for the original works. These rulings shift the legal focus toward the source of training data and whether the AI model’s output causes market harm, setting the stage for future litigation around this issue.

The article below provides overviews of the Anthropic and Meta cases, explores the four factors of the fair use defense in view of LLM training for each case, and concludes with related implications and takeaways for AI model developers, copyright owners, and AI model end users.

Case Overviews

Bartz v. Anthropic PBC 

In Bartz v. Anthropic PBC, the court addressed the complex intersection between copyright law and artificial intelligence training. The plaintiffs — authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, along with their affiliated companies — brought suit against Anthropic PBC, an AI firm behind the Claude language model, alleging that Anthropic had unlawfully copied their copyrighted books. Anthropic assembled a massive digital library by both purchasing and pirating millions of books, which it then used to train large language models (LLMs), including Claude. 

At issue was whether Anthropic’s various uses of the copyrighted works — including training LLMs, digitizing print copies, using digital pirated copies, and maintaining a central research “library” (a digital database of the copyrighted books) — qualified as “fair use” under 17 U.S.C. § 107. The court evaluated each use against the four statutory fair use factors and found that while some uses were transformative and thus lawful, others — particularly the use of pirated copies to build a permanent library — were not protected under the fair use doctrine.

Kadrey v. Meta Platforms Inc.

In Kadrey v. Meta Platforms Inc., thirteen prominent authors, including Sarah Silverman and Junot Díaz, filed suit against Meta for allegedly using their copyrighted works—downloaded from unauthorized “shadow libraries”—to train Meta’s large language models (LLMs), particularly the Llama series.

The plaintiffs argued that Meta’s conduct could not qualify as fair use, focusing on harms to the market for their works and the unauthorized nature of Meta’s data acquisition. In contrast, Meta contended that its actions constituted fair use as a matter of law, emphasizing the transformative purpose of LLM training. The court granted summary judgment in favor of Meta, noting the plaintiffs’ failure to adequately substantiate the core theory that Meta’s use would cause significant market harm. However, the ruling applies narrowly to these plaintiffs and does not resolve broader questions about the legality of using copyrighted works in AI training.

Copyright “Fair Use” (Four Factor Analysis by the Courts)

Both the Anthropic court and the Meta court considered whether the use of the copyrighted works qualified as “fair use.” Fair use is a statutory defense to allegations of copyright infringement, typically raised in U.S. copyright disputes and evaluated under a four-factor test:

[T]he fair use of a copyrighted work … for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research, is not an infringement of copyright. In determining whether the use made of a work in any particular case is a fair use the factors to be considered shall include[:]

1. The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;

2. The nature of the copyrighted work;

3. The amount and substantiality of the portion used in relation to the copyrighted work as a whole; and

4. The effect of the use upon the potential market for or value of the copyrighted work.

Anthropic at *6.

The following sections consider each of these four factors for both the Anthropic and Meta cases. In addition, the following sections focus on two stages of the AI model development and training process where AI model developers typically face copyright infringement allegations. The first stage is when the AI model developer stores the copyrighted works in computer memory for the purpose of training. The second stage is when the trained AI model produces an output: is that output the same as, or substantially similar to, the original copyrighted work or a derivative thereof? At this second stage, a court could focus on whether the output of a given AI model was significantly transformative as opposed to a copy or a derivative work of the original copyrighted material. An AI model can be probed via prompt engineering to determine whether it will output substantially similar or derivative works of the original copyrighted material. See Getty Images v. Stability AI, Case 1:23-cv-00135 (D. Del. Mar. 29, 2023) (Amended Complaint) (Dkt. 13).

Regarding the first of these stages, and as discussed further below, both the Anthropic and Meta courts were clear that training an AI model with copyrighted works was sufficiently transformative to support a fair use defense. In fact, at least according to these two cases, this is one of the most important considerations, if not the most important, for finding fair use.

Regarding the second of these stages, in both the Anthropic and Meta cases, the plaintiff-authors failed to allege that the output of the respective LLMs was the same as, or substantially similar to, their works, and the courts were emphatic in highlighting this failure. That is, had the authors provided evidence and argument that the accused models produced the same or substantially similar output, the respective courts indicated that they would have readily (and eagerly) addressed the issue. Because the authors failed to raise it, neither court ruled on the issue, and each instead highlighted the authors’ failure to do so. We can expect future plaintiffs to address this second stage more fully.

1. The Purpose and Character of the Use

This factor examines whether the use was transformative and whether it served a commercial or nonprofit purpose.

Bartz v. Anthropic PBC 

Regarding training LLMs, the court concluded that Anthropic’s use of the plaintiffs’ works to train LLMs was “spectacularly transformative.” Training involved complex processes like tokenization and statistical modeling to teach the LLM to generate new, human-like text. Importantly, the plaintiffs did not allege that the trained Claude system produced outputs that were the same as, or substantially similar to, their works (the hallmark of a copyright infringement claim). The court likened this to a person reading and learning from a book to become a better writer — a transformative use that did not usurp the market for the original works.
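For readers unfamiliar with the terminology, the sketch below is a deliberately toy illustration of what “tokenization and statistical modeling” can mean. The whitespace tokenizer, bigram counts, and example corpus are all assumptions for illustration only; production LLMs use subword tokenizers and neural networks trained at vastly larger scale.

```python
# Toy illustration only: real LLMs use subword tokenizers (e.g., byte-pair
# encoding) and neural networks, not this simplified whitespace/bigram model.
from collections import Counter, defaultdict

def tokenize(text):
    """Toy tokenizer: lowercase whitespace split."""
    return text.lower().split()

def train_bigram_model(corpus):
    """Count token-pair statistics -- a crude stand-in for 'statistical
    modeling' of which token tends to follow which."""
    counts = defaultdict(Counter)
    for doc in corpus:
        tokens = tokenize(doc)
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(model, token):
    """Return the statistically most common next token, if any."""
    followers = model.get(token.lower())
    return followers.most_common(1)[0][0] if followers else None

corpus = ["the cat sat on the mat", "the cat ran"]
model = train_bigram_model(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" most often here
```

The point of the analogy is that training extracts statistical relationships across a corpus rather than storing any one work for playback.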

Regarding the conversion of purchased print books to digital copies, Anthropic also purchased millions of print books, scanned them, and stored digital copies in its central library. Because each scanned copy replaced its purchased print counterpart, and the digital format merely facilitated internal storage and searchability, the court found that this use (i.e., a format change of the original purchased works) weighed in favor of fair use under the first factor.

In stark contrast, regarding pirated digital book copies, the court found that Anthropic’s use of pirated copies to build a permanent, general-purpose library was not transformative. These copies were acquired to avoid “legal/practice/business slog” and were kept indefinitely, even when not used for training. The court emphasized that fair use does not grant AI developers blanket permission to steal and store works simply because some might later be used in transformative ways.

Kadrey v. Meta Platforms Inc.

The first factor—whether the use is transformative and/or commercial—strongly favored Meta. The court found that Meta’s use of copyrighted books to train its LLMs served a transformative purpose distinct from the original works. While the plaintiffs’ books were intended for consumption as literary or educational texts, Meta used them to extract linguistic patterns and structures to power a tool capable of responding to diverse user prompts.

Even though Meta’s ultimate goal was commercial, potentially generating up to $1.4 trillion in revenue over a decade, the transformative nature of its use was decisive. The court noted that copyright law generally gives more leeway to commercial uses when the new work adds something significantly new. The court also rejected arguments equating LLM training with simple repackaging or copying, noting that Meta’s models do not meaningfully output the plaintiffs’ original texts. In particular, Meta’s LLM was found incapable of reproducing any significant portion of the plaintiffs’ copyrighted books, even under conditions designed to provoke memorization. For example, the court noted that Meta’s expert employed an “adversarial prompting” technique specifically intended to elicit material from training data, yet no model produced more than 50 tokens (words and punctuation) from the plaintiffs’ works. The plaintiffs’ own expert achieved similar results in only 60% of tests using the most responsive Llama variant, and further confirmed that Llama was unable to reproduce any substantial portion of the books. Such findings supported the conclusion that Llama could not be used to read or meaningfully access the plaintiffs’ copyrighted works.

Further, Meta’s controversial use of shadow libraries, while potentially relevant to bad faith, did not outweigh the fundamentally different and transformative nature of the use.
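The 50-token figure discussed above reflects a measurable quantity: the longest contiguous run of tokens that a model’s output shares with a source text. The sketch below is a minimal illustration of that kind of measurement, using a hypothetical whitespace tokenizer and invented example strings rather than the experts’ actual methodology or data.

```python
# Hypothetical sketch of an overlap metric: the longest contiguous run of
# tokens shared between a model's output and a source text. The tokenizer,
# threshold usage, and examples are illustrative, not from the case record.

def tokens(text):
    # Simplified: split on whitespace (real analyses use model tokenizers).
    return text.lower().split()

def longest_shared_run(source, output):
    """Longest contiguous token sequence common to both texts (classic
    dynamic-programming longest-common-substring, over tokens)."""
    s, o = tokens(source), tokens(output)
    best = 0
    prev = [0] * (len(o) + 1)  # run lengths ending at previous source token
    for i in range(1, len(s) + 1):
        cur = [0] * (len(o) + 1)
        for j in range(1, len(o) + 1):
            if s[i - 1] == o[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

THRESHOLD = 50  # tokens, mirroring the figure discussed in the opinion

source = "it was a bright cold day in april and the clocks were striking"
output = "the model said it was a bright cold day in april today"
run = longest_shared_run(source, output)
print(run, run >= THRESHOLD)  # an 8-token shared run falls below 50
```

Under this kind of metric, a model that never emits a long verbatim run from any training text is difficult to characterize as a substitute for the original works.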

2. The Nature of the Copyrighted Work

This factor considers the creativity and factual nature of the original works.

Bartz v. Anthropic PBC 

All of the plaintiffs’ books — both fiction and nonfiction — were published and expressive. The court acknowledged that expressive, creative works are closer to the “core” of copyright protection. Because Anthropic specifically valued these works for their expressive qualities in both training and building its library, the court found this factor weighed against fair use across all types of uses — even for those ultimately deemed lawful under other factors.

Kadrey v. Meta Platforms Inc.

This factor favored the plaintiffs. Their works—novels, memoirs, and plays—are highly creative and fall within the heartland of copyright protection. However, courts have historically afforded this factor limited weight, especially when the works have already been published. The court noted that while Meta may not have used the books for their creative expression directly, the statistical patterns it sought to extract were themselves a product of expressive choices like word order, syntax, and style—all protectable elements.

Nonetheless, the court did not view this factor as significantly altering the outcome of the fair use analysis, particularly in light of the highly transformative use under Factor One.

3. The Amount and Substantiality of the Portion Used

Here, the courts assessed whether the amount copied was reasonable in relation to the use.

Bartz v. Anthropic PBC 

Regarding training LLMs, although Anthropic copied the entirety of plaintiffs’ works for training, the court found this was reasonable given the monumental volume of text required for training effective LLMs. The absence of any public-facing reproduction of plaintiffs’ works further supported the finding of fair use.

Regarding purchased print-to-digital book conversion, because the digital versions replaced the destroyed print copies and were not shared externally, the court held that copying the entire work was reasonable and aligned with the intended internal use.

In contrast, regarding pirated digital book copies, the court found that copying entire works from pirate sites — particularly to build a centralized research library of indefinite use — was not reasonable. The purpose extended beyond any specific transformative use, and the court noted that almost any level of unauthorized copying would be excessive under these circumstances.

Kadrey v. Meta Platforms Inc.

Although Meta copied the plaintiffs’ books in their entirety, the court held that this factor favored Meta due to the necessity of full-text ingestion for the transformative purpose of LLM training. The extent of the copying was deemed reasonable given the technical requirements of training such models. The court emphasized that the key consideration was not the sheer amount of copying, but whether the amount used was excessive in light of the use’s purpose.

Given that LLMs perform better with more high-quality data and that partial books would not serve the training purpose effectively, copying entire works was justified and did not weigh against fair use.

4. The Effect of the Use Upon the Market

This factor evaluates whether the use harms the market for or value of the original work. It is typically the most critical factor in a fair use analysis and posed the greatest challenge for the plaintiffs.

Bartz v. Anthropic PBC 

Regarding training LLMs, because there was no allegation that Claude’s outputs were infringing or substituted for the plaintiffs’ books, the court found no adverse market effect. Even potential market competition from LLM-generated works was deemed irrelevant under copyright law, which does not protect authors from generic competition.

Regarding purchased print-to-digital book conversion, although Anthropic might have foregone purchasing digital copies, the court found no evidence of redistribution or market usurpation. The internal use of a legally purchased print copy — albeit in a different format — did not harm the existing market in a way actionable under copyright law.

Regarding pirated digital book copies, this use had a direct and deleterious effect on the market. By copying works it could have lawfully purchased, Anthropic displaced market demand on a copy-for-copy basis. The court emphasized that permitting such behavior would effectively destroy the publishing industry, as it would incentivize theft in the name of downstream transformative use.

Kadrey v. Meta Platforms Inc.

The court identified three potential types of market harm: (1) regurgitation of the original works, (2) loss of licensing revenue for AI training, and (3) market dilution through proliferation of similar AI-generated content.

The first two arguments failed due to insufficient evidence. Llama was not capable of meaningfully regurgitating the plaintiffs’ works, and courts do not recognize a right to licensing revenue for transformative uses. While the third argument—market dilution—was conceptually strong and could be highly relevant in future cases, the plaintiffs failed to plead or support it with evidence. Thus, they could not create a triable issue of fact on this point.

The court stressed that while market dilution from AI-generated content may be a valid concern under copyright law, it must be substantiated with evidence. As such, Factor Four also favored Meta.

Court’s Conclusions and Takeaways 

Bartz v. Anthropic PBC 

The Anthropic court’s overall analysis reflected a nuanced application of the fair use doctrine. It recognized fair use for the training of LLMs using copyrighted books, which was considered transformative. So was the scanning of purchased print copies for internal digital storage and use.

However, the Anthropic court denied the fair use defense for the use of pirated copies to build a central research library, which was not considered transformative and failed all four fair use factors.

Accordingly, the Anthropic court granted summary judgment in favor of Anthropic for the training and format-conversion uses, but denied it for the pirated library copies. The case is set to proceed to trial to determine liability and damages for the unauthorized acquisition and retention of those pirated materials.

This decision reinforces that while AI development may qualify for fair use under certain conditions, courts will scrutinize the methods and intentions behind data acquisition — especially where piracy is involved. AI innovators must balance transformative use with lawful sourcing to stay within the bounds of copyright law.

Kadrey v. Meta Platforms Inc.

The ruling in Kadrey v. Meta Platforms Inc. offers a nuanced but limited precedent. While Meta prevailed on summary judgment, the court’s decision hinged on the plaintiffs’ failure to develop and present a compelling case on the most critical issue—market harm. The decision does not validate Meta’s use of copyrighted works in AI training as lawful per se; rather, it underscores the importance of presenting the right evidence under the fair use framework.

This case may serve as a roadmap for future litigants—highlighting the potential viability of market dilution arguments and signaling that courts remain receptive to fair use challenges in the context of transformative AI technologies, so long as they are properly developed and supported.

Also, as the second of the two decisions, the Meta opinion voiced its differences and concerns with the Anthropic court, stating that the Anthropic court “focused heavily on the transformative nature of generative AI while brushing aside concerns about the harm it can inflict on the market for the works it gets trained on.” Id. at *11. The Meta court took issue with the Anthropic court’s reasoning that “[s]uch harm would be no different … than the harm caused by using the works for ‘training schoolchildren to write well,’ which could ‘result in an explosion of competing works.’” Instead, the Meta court was sympathetic to the plaintiff-authors’ concern regarding market harm: “when it comes to market effects, using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take. This inapt analogy is not a basis for blowing off the most important factor in the fair use analysis.” Id.

Conclusion

The court decisions involving Meta and Anthropic mark the beginning of what is expected to be a wave of legal rulings addressing copyright issues in generative AI. While these initial cases centered on large language models (LLMs) trained on books, future outcomes may vary depending on the nature of the training data and output. Notably, cases involving image-based content, like Getty Images v. Stability AI, or code-based output, as in Doe 1 v. GitHub, Inc., No. 4:22-cv-06823-JST (N.D. Cal.), may yield different legal analyses, highlighting the evolving complexity of copyright law as applied to various AI-generated modalities.

For example, such cases also explore an important question: what type of relationship should copyright holders have with AI model developers? This question matters not only to authors of books, articles, and other written materials, but also to companies whose intellectual property rests on computer software code. For example, if an AI tool is used to create valuable source code for a company’s product or service, who owns that source code (if anyone, per the authorship requirements under U.S. copyright law)? And is that source code subject to potential copyright claims for reproducing the same or substantially similar code on which the AI tool was trained?

Implications for Artificial Intelligence (AI) Model Developers. For AI model developers and related stakeholders—especially tech platforms, cloud providers, publishers, and data brokers—these decisions can signal a need for immediate action. Organizations should consider auditing their training datasets and vendor agreements to ensure all source materials are lawfully obtained, carefully document any market impact, and update internal policies accordingly. Legal and technical leaders should consider collaborating closely to align data practices with emerging legal expectations. For example, one approach from the Meta and Anthropic decisions can involve digitizing legally purchased physical books and then destroying the originals. To further reduce legal exposure, LLM developers can implement output guardrails that prevent or minimize the reproduction of copyrighted content. 
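As one hypothetical illustration of the output guardrails mentioned above, a filter could refuse to emit a response containing a long verbatim n-gram from a protected reference corpus. The corpus, threshold, and function names below are assumptions for illustration only, and do not describe any vendor’s actual safeguards.

```python
# Hypothetical output guardrail: block a candidate response if it contains a
# long verbatim n-gram from a protected reference corpus. Corpus, threshold,
# and names are illustrative assumptions, not any vendor's real system.

def ngrams(text, n):
    """Set of all contiguous n-token sequences in the text."""
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def violates_guardrail(candidate_output, protected_texts, n=8):
    """Return True if any n consecutive tokens of the candidate output
    appear verbatim in any protected text."""
    out_grams = ngrams(candidate_output, n)
    return any(out_grams & ngrams(t, n) for t in protected_texts)

protected = ["call me ishmael some years ago never mind how long precisely"]
ok_reply = "the novel opens with a famous first line about the narrator"
bad_reply = "call me ishmael some years ago never mind how long precisely indeed"

print(violates_guardrail(ok_reply, protected))   # False: no 8-token overlap
print(violates_guardrail(bad_reply, protected))  # True: verbatim passage
```

A production filter would face harder problems (paraphrase, tokenizer mismatch, corpus scale), but the basic design choice — screening outputs against known protected text before release — is the kind of measure the decisions suggest can reduce exposure.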

Implications for Copyright Owners. Copyright owners may want to keep sensitive data a trade secret. If desired, a copyright owner seeking to license its private data for training purposes may want to consider doing so under a license agreement that includes privacy restrictions pursuant to a non-disclosure agreement (NDA) to prevent the data from leaking to the public. One of the main issues for the copyright owners in the Anthropic and Meta cases was that the copyrighted works were public, such that the authors could not control their use for training. This will always be the case for books and other copyrighted works intended for public consumption. But for trade secret data, such as proprietary datasets, more control can be exercised to monetize valuable datasets for AI training.

Implications for AI Model Users. Companies utilizing large language models (LLMs) can take key measures when contracting with LLM developers. First, they should consider auditing the training data by requesting a comprehensive list of datasets used to train or fine-tune the model, ensuring no pirated content from shadow libraries is included. Second, they could also consider verifying that the LLM incorporates effective guardrails to prevent the output of copyrighted material, with internal testing by creative staff to confirm their effectiveness. Finally, companies should consider negotiating strong indemnification provisions to protect against potential copyright infringement claims, recognizing that while current litigation has focused on developers, users may still face some legal exposure.

****

We can expect appeals from these cases and the appellate courts to take up these issues and provide guidelines. However, this could take several years, and these issues will likely find their way to the Supreme Court for ultimate resolution. This assumes, of course, that Congress does not act first to provide a statutory framework. 

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at rphelan@marshallip.com or 312-474-6607. Connect with or follow Ryan on LinkedIn.

PatentNext Summary: In Brightex Bio-Photonics, LLC v. L’Oreal USA, Inc., the U.S. District Court for the Northern District of California invalidated patent claims relating to AI-driven cosmetic recommendations, finding them directed to an abstract idea under 35 U.S.C. § 101. The court held that while the specification referenced artificial intelligence, the claims themselves failed to include any specific AI implementation or technological improvement. Brightex argued that elements such as a “photo guide” improved facial data acquisition, but the court found these features to be conventional and lacking inventive contribution. The decision highlights the importance of drafting software and AI-related claims that incorporate technical features demonstrating improvements to underlying technology, serving as a reminder for practitioners to align with established patent eligibility standards.

****

The U.S. District Court for the Northern District of California (N.D. Cal.) recently invalidated a set of patent claims allegedly claiming artificial intelligence (AI) technology. Brightex Bio-Photonics, LLC v. L’Oreal USA, Inc., 2025 U.S.P.Q.2d 412 (N.D. Cal. 2025).

Brightex had accused L’Oreal of infringing U.S. Patent No. 9,842,358 (the “’358 Patent”), titled “Method for Providing Personalized Recommendations,” in the field of cosmetology and specifically related to “the cosmetic improvement of a person’s face.” Id. at 2 (citing ’358 Patent at 1:8-10).

The court reviewed Claim 16 as a representative claim:

      16. A computerized method for providing prioritized skin treatment recommendations to a user, comprising:

receiving from an electronic device image data of a user’s face, wherein the electronic device comprises a camera and a display, wherein the image data is obtained via said camera, and wherein said electronic device presents on the display a photo guide indicating how the user’s face should be positioned with respect to the camera when the image data is obtained;

transforming via a computer said image data via image processing into measurements in order to identify at least two skin characteristics of the user from the received image data;

calculating a severity rating for each of the at least two user skin characteristics by:

accessing stored population information comprising measurements for at least two skin characteristics of a population of the same type as the at least two skin characteristics of the user, wherein each of the measurements for the at least two population skin characteristics comprises a mean value and a standard deviation value;

comparing each of the measurements of the at least two user skin characteristics to the measurements of same type population skin characteristic;

determining by how much each of the measurements of the at least two user skin characteristics deviates from the mean value and the standard deviation value of the same type population skin characteristic;

assigning higher severity rating to the user skin characteristic which deviates furthest than at least one standard deviation of the same type population skin characteristic; and

for a subset of the user skin characteristics with the highest severity rating, selecting one or more skin treatment recommendations from stored skin treatment recommendations based on the subset of the user skin characteristic with the highest severity rating; and

providing to the electronic device the selected one or more skin treatment recommendations.

In its complaint, Brightex included a section describing the invention, including its “advanced and innovative technology relating to the recognition and computerized analysis of facial features.” Id. at 8.

The complaint also described how the invention used Artificial Intelligence (AI) with “commercially available smart phone” technology “in order to accurately assess skin condition to recommend the correct cosmetics and skincare treatments.” Id.

L’Oreal filed a motion to dismiss the complaint (pursuant to Fed. R. Civ. P. 12(b)(6)), arguing that the ’358 patent was invalid for being directed to an abstract idea without an inventive concept under 35 U.S.C. § 101. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208 (2014). In particular, L’Oreal argued that the claims were directed to “the abstract idea of recommending treatments based on the severity of a person’s skin characteristics and rely solely on generic computer components to carry out that idea.” Brightex Bio-Photonics, 2025 U.S.P.Q.2d 412 at 10.

Brightex countered by arguing that a “photo guide” (as recited in the claims) is “used in a process specifically designed to achieve improved facial data acquisition and subsequently an improved identification of skin defects and the severity of those defects.” Id. at 13.

Thus, according to Brightex, the claim was sufficiently technical and should at least be allowed to proceed beyond the pleadings phase of the case.

The Northern District court disagreed. While the patent’s specification described AI features related to the invention, the failure of the patent to incorporate those features in the claims doomed the patent claim. In addition, the patentee had also failed to describe how the claimed photo guide provided an improvement to the underlying device – instead, the photo guide was claimed as used in a prior art manner, e.g.:

There is nothing in the claims or specification that suggests the “photo guide” is directed at solving any technological problem or doing anything more than ensuring the user’s face is positioned so as to obtain a usable digital image.

Id. at 27.

Accordingly, the Northern District court invalidated the claims as abstract and subsequently dismissed the allegations regarding the ’358 patent from the case.

The Northern District court’s treatment of the claims comes as no surprise. As I regularly discuss on PatentNext, as well as practice with respect to the patents I prepare for my clients, a patent drafter should incorporate technical features (e.g., such as AI features) into the claims themselves that demonstrate an improvement to the underlying device. The Federal Circuit has repeatedly identified this as one of three hallmarks for developing a strong software-based patent in the U.S. See PatentNext: How to Patent Software Inventions: Show an “Improvement.” Without this approach, a patent application can not only be subjected to a Section 101 rejection during prosecution, but a later-issued patent can also be invalidated for the same reasons, as was the case in Brightex Bio-Photonics.

Patent practitioners would be well served to prepare patent applications in accordance with this guidance, and this case serves as a cautionary tale for failure to do so.

****


PatentNext Summary: In two recent decisions, the Federal Circuit reaffirmed that merely applying artificial intelligence or digital techniques to a specific “field of use” does not satisfy patent eligibility under 35 U.S.C. § 101. In Recentive Analytics v. Fox Corp., claims directed to AI-assisted television scheduling were deemed abstract for lacking inventive implementation. Similarly, in Longitude Licensing Ltd. v. Google LLC, claims involving digital image correction were invalidated because they recited only functional, results-oriented language without explaining how the technical improvement was achieved. These rulings emphasize that to be patent-eligible, claims must include specific, technical details that demonstrate an actual improvement over prior art—not just a novel application of generic technology.

****

In a recent decision, the Federal Circuit found ineligible patent claims that recited machine learning but otherwise applied it generically to a “field of use,” i.e., automatically scheduling regional television broadcasts. See Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025). In that case, the Federal Circuit rejected the idea that applying AI to a novel domain—such as television scheduling—could rescue the claims. According to the Federal Circuit, a so-called “field-of-use” limitation is insufficient to render an abstract idea patent eligible. Merely moving generic AI into a different industry does not convert it into an inventive concept under 35 U.S.C. § 101 (patent eligibility). For additional discussion of Recentive, see PatentNext: Federal Circuit finds Generic AI Claims to be Abstract.

In a more recent decision, the Federal Circuit once again found generic “field-of-use” claims invalid under Section 101. See Longitude Licensing Ltd. v. Google LLC, U.S.P.Q.2d 690 (Apr. 30, 2025). In the Longitude Licensing case, the Federal Circuit found invalid claims directed to performing digital image correction techniques via a computer. The patent specifications described identifying the subject, or “main object,” of an image and adjusting the main object image data by using “correction conditions,” which include any kind of “statistical values and color values” that correspond to the “properties” of the main object.

Claim 32 of one of the patents is representative and is reproduced below:

    32. An image processing method comprising:

determining the main object image data corresponding to the main object characterizing the image;

acquiring the properties of the determined main object image data;

acquiring correction conditions corresponding to the properties that have been acquired; and

adjusting the picture quality of the main object image data using the acquired correction conditions;

wherein each of the operations of the image processing method is executed by an integrated circuit.

The district court had found that claim 32 was abstract under Section 101 because claim 32 was generic, functional, and “ends-oriented.”

The Federal Circuit affirmed. In particular, the Federal Circuit cited its analysis in Recentive, finding claim 32 abstract because it generically recited the use of new data (e.g., the correspondence between the main object data and correction conditions as recited in claim 32) in the field of image processing but failed to disclose how to implement the concept. Like the claims in the Recentive decision, claim 32 in Longitude Licensing was a generic “field of use” claim where neither the claims nor the specifications describe how any improvement was accomplished. Claim 32 was abstract because it was “framed entirely in functional, results-oriented terms.” 

The Federal Circuit refused to save claim 32 by importing technical disclosure from the specification into the claim so that it provided the same degree of technical specificity as found in other Federal Circuit decisions demonstrating proper claim specificity. See McRO, Inc. v. Bandai Namco Games America Inc., 837 F.3d 1299, 1313 (Fed. Cir. 2016) (as cited by the Federal Circuit).

Conclusion

The Longitude Licensing decision offers a further lesson for patent practitioners on drafting a patent application in a manner that adheres to the Federal Circuit’s three-part framework for demonstrating a technical “improvement.” Properly implemented, this framework includes (1) a description of the improvement in the patent specification; (2) a description of how the improvement differs from, and overcomes, the prior art; and (3) inclusion of at least some aspect of the improvement in the claims. Claim 32 failed at least the third part of this test, which proved fatal to the plaintiff’s case. For more details on claiming an improvement, see PatentNext: How to Patent Software Inventions: Show an “Improvement.”

****


Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the author, Ryan Phelan, at rphelan@marshallip.com or 312-474-6607. Connect with or follow Ryan on LinkedIn.

PatentNext Summary: The Federal Circuit’s decision in Recentive Analytics, Inc. v. Fox Corp. found that applying generic machine learning techniques to a new environment, without a specific technological improvement, is patent-ineligible under 35 U.S.C. § 101. The court emphasized that claims must articulate concrete technological advancements rather than merely applying established methods to different domains. The ruling offers key guidance for patent practitioners, highlighting the need for detailed descriptions of technical innovation and cautioning against relying on field-of-use limitations or functional claiming. As AI technologies continue to advance, careful patent drafting that focuses on novel implementations will be critical for surviving eligibility challenges.

****

The Federal Circuit’s recent decision in Recentive Analytics, Inc. v. Fox Corp., No. 2023-2437 (Fed. Cir. Apr. 18, 2025), marks another significant moment in the evolving intersection of artificial intelligence (AI) and patent law. The ruling affirmed the district court’s dismissal of claims under 35 U.S.C. § 101, holding that applying generic machine learning to a new data environment—without claiming a specific improvement to the technology itself—constitutes an abstract idea and is therefore patent-ineligible.

This case is notable not just for its holding, but also for the clarity it offers on how courts are likely to assess the eligibility of AI-driven innovations going forward. For legal practitioners and applicants alike, the decision offers both a cautionary tale and a guidepost on how to craft applications that can survive § 101 scrutiny.

On a lighter note, the Federal Circuit did recognize the newness and importance of machine learning and, in its conclusion, provided a statement limiting its holding to generic machine learning patent claims:

Machine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology. Today, we hold only that patents that do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied, are patent ineligible under § 101.

Background: The Patents and the Invention
Recentive Analytics (“Recentive”), whose machine learning technology has been used by the National Football League (NFL) to set its schedule, alleged that Fox used infringing software to schedule its regional television broadcasts, including NFL games.

Recentive owned four patents across two families:

  1. Machine Learning Training Patents (U.S. Patent Nos. 11,386,367 and 11,537,960) – focused on dynamically generating optimized schedules for live television broadcasts using machine learning models trained on historical data.
  2. Network Map Patents (U.S. Patent Nos. 10,911,811 and 10,958,957) – addressed the generation of “network maps” that determine how television programs are displayed on specific channels in designated geographic markets.

According to Recentive, the traditional manual methods used by broadcasters were crude and incapable of responding to real-time changes in viewer preferences. Its technology purportedly provided a solution through dynamic, machine-learning-based scheduling and map generation.

After being sued for infringement, Fox challenged the validity of the patents under § 101. The district court agreed and dismissed the claims, finding them directed to abstract ideas implemented with generic machine learning techniques.

The Federal Circuit’s Analysis
The Federal Circuit affirmed the lower court’s ruling, reinforcing its approach to § 101 jurisprudence with respect to AI-related claims. Judge Dyk, writing for the panel and noting that the case presented a question of first impression, approached the central issue as follows:

“Whether claims that do no more than apply established methods of machine learning to a new data environment are patent eligible.”

The panel answered no, holding that such claims are not patent eligible. The panel emphasized that merely using AI or machine learning in a conventional way is not sufficient to convert an otherwise abstract idea into patent-eligible subject matter.

The Federal Circuit found fault with Recentive’s patents for the following reasons. 

1. Generic Use of Machine Learning
The claims did not seek to protect a new machine learning algorithm. Rather, they involved applying conventional machine learning models—described broadly as “any suitable machine learning technique”—to an existing problem in broadcast scheduling. The specifications and claims did not articulate any modification or advancement in the underlying technology. As a result, the use of machine learning was deemed “generic,” and therefore abstract.
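The "generic" character of such claims can be illustrated with a deliberately trivial sketch. This example is not Recentive's actual system; the class name and the stand-in "training" logic are invented for illustration. The point is that a wrapper satisfying "any suitable machine learning technique" is indifferent to its data environment, so moving it from one industry to another changes only the field of use, not the technique:

```python
# Illustrative only: a completely generic "train then predict" wrapper.
# Nothing here is specific to television scheduling - swapping the data
# swaps the "field of use" while the technique stays identical, which is
# the pattern the Federal Circuit found abstract.

from statistics import mean

class AnySuitableModel:
    """Stand-in for 'any suitable machine learning technique'."""

    def fit(self, X, y):
        # Trivially generic "training": memorize the average label.
        self.avg = mean(y)
        return self

    def predict(self, X):
        return [self.avg for _ in X]

# The same generic code applied to two different "data environments":
tv_model = AnySuitableModel().fit([[1], [2], [3]], [5.0, 6.0, 7.0])
farm_model = AnySuitableModel().fit([[1], [2]], [10.0, 20.0])

print(tv_model.predict([[4]]))    # [6.0]
print(farm_model.predict([[3]]))  # [15.0]
```

Because nothing in the wrapper changes between domains, a claim covering it describes only where the model is used, not any advance in how it works.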

2. Lack of Technological Improvement
Recentive argued that its inventions offered a technical solution to a technical problem by dynamically generating schedules and maps. However, the court found that features like iterative training and dynamic data updates are inherent to machine learning itself and do not reflect any technological advancement. Without details about how these outcomes were achieved through innovation, the claims fell short.

3. Insufficient Implementation Details
Critically, the Federal Circuit emphasized that the patents failed to provide implementation details that would distinguish the claims from a mere directive to apply machine learning. The absence of delineated steps or specific algorithms meant that the claims amounted to aspirational goals rather than technical instructions.

4. Field-of-Use Limitations
The court rejected the idea that applying AI to a novel domain—such as television scheduling—could rescue the claims. A field-of-use limitation is insufficient to render an abstract idea patent eligible. Merely moving generic AI into a different industry does not convert it into an inventive concept under § 101.

5. Speed and Efficiency Are Not Enough
Finally, the court dismissed arguments based on performance improvements. Speed and efficiency gains, without a corresponding technological breakthrough, do not transform an abstract idea into patent-eligible subject matter.

Comparison to Past Precedents
Recentive sought to analogize its claims to precedents where software patents were upheld:

  • In Enfish, LLC v. Microsoft Corp., claims were found eligible because they recited a specific improvement to computer database functionality.
  • In McRO, Inc. v. Bandai Namco Games America Inc., the use of rule-based automation for lip-syncing yielded a technological improvement.
  • In Koninklijke KPN N.V. v. Gemalto M2M GmbH, the claims addressed error detection in data transmission—a concrete technical advance.

The Federal Circuit rejected these comparisons, stating that Recentive’s patents lacked the detailed implementations and clear technological benefits present in those cases.

Instead, the court likened the patents to those in Electric Power Group, LLC v. Alstom S.A. and SAP Am., Inc. v. InvestPic, LLC, where the claims involved collecting and analyzing data without describing how the methods improved technology.

Alice Step Two: The Inventive Concept
Under Alice Corp. v. CLS Bank International, step two of the eligibility test asks whether the claims contain an “inventive concept” sufficient to transform the abstract idea into a patent-eligible application.

Recentive pointed to the use of real-time data, dynamic outputs, and machine learning as its inventive concept. The court was not persuaded: these features are part and parcel of what machine learning already does. Since there was nothing unconventional about their use, the claims failed Alice step two.

Implications for AI and Software Patents
This decision illustrates a broader trend in AI patent jurisprudence: courts remain skeptical of claims that rely on generic use of machine learning without articulating technological innovation. Importantly, the court left the door open for AI patents that improve the underlying algorithms or computer functionality—but it signaled that “do it using AI” will not suffice. This is not surprising given that the Supreme Court’s Alice decision held that generic claims reciting, in effect, “do it on a computer” are also not patent-eligible.

Attorneys drafting AI-related patent applications must therefore be vigilant in distinguishing true technological advancements from applications of known techniques.

Best Practices: Drafting Patent Applications to Survive § 101 Challenges
The Recentive decision underscores the importance of meticulous drafting when seeking patent protection for AI-driven innovations. Below are some best practices to improve the chances of success:

1. Claim a Specific Technological Improvement
Avoid merely reciting the use of machine learning or AI. Instead, clearly identify a novel technical feature or architecture. Demonstrate how the invention changes the way a computer operates or how the algorithm improves performance.

2. Describe the Innovation in Detail
Include specific implementation steps, data flows, and algorithmic mechanisms. Vague language such as “any suitable machine learning model” invites eligibility challenges. Provide concrete examples and explain how the result is achieved.

3. Differentiate from Conventional Methods
Show how the invention departs from prior art or conventional techniques. Highlight not only what the invention does but how it accomplishes it in a novel and non-obvious way.

4. Avoid Field-of-Use Limitations
Ensure the inventive concept is not limited to the application of generic technology in a new context. Field-specific applications are insufficient unless coupled with a unique technical implementation.

5. Include Technical Benefits in the Specification
Tie the benefits of the invention—such as reduced computational load, increased accuracy, or novel data processing—to concrete technical improvements. Avoid framing benefits solely in terms of business advantages or efficiency gains.

6. Claim Structurally—Not Functionally
Whenever possible, claim system components, data structures, and processes in structural or algorithmic terms rather than abstract functional language. Courts are more likely to uphold claims that describe specific arrangements and processes.

7. Use Dependent Claims Strategically
Include dependent claims that recite specific machine learning models, feature extraction methods, or training protocols. These narrower claims provide fallback positions that may preserve eligibility under § 101 even if broader claims are challenged.

Conclusion
The Recentive decision serves as a timely reminder that AI-driven innovations must be carefully framed to withstand eligibility scrutiny. Generic applications of machine learning are unlikely to survive § 101 challenges unless tied to specific, concrete technological improvements. As AI continues to evolve, so too must the strategies employed to protect it through intellectual property.

Patent practitioners must adapt by focusing not only on the novelty and utility of an invention, but on articulating the technical “how” in a way that the courts will find both meaningful and eligible.
