Double Trouble – Apple’s Recent Legal Setbacks Highlight Key Lessons in Global IP Strategy

Apple Inc. continues to sit at the forefront of global innovation, but even the most sophisticated technology companies are not immune to complex legal challenges. In recent weeks, Apple has faced two significant intellectual property (IP) setbacks, one in the United States and one in the United Kingdom, each with far-reaching implications for companies navigating patent litigation, standards licensing, and global IP enforcement.

This blog examines two recent decisions that have put Apple’s IP practices under scrutiny: one involving the use of Applicant Admitted Prior Art (AAPA) in a U.S. case before the Federal Circuit, and another concerning royalty obligations for standard-essential patents (SEPs) in the UK.

Federal Circuit Reverses PTAB Decision: AAPA Misapplied

In April 2025, the U.S. Court of Appeals for the Federal Circuit overturned a ruling in Apple’s favor by the Patent Trial and Appeal Board (PTAB). The case centered on Apple’s challenge to a patent using a combination of a printed publication and Applicant Admitted Prior Art (AAPA), that is, statements in the challenged patent’s own specification acknowledging the existence of certain prior art.

The PTAB had sided with Apple, holding that the combination was a valid ground for invalidating the claims. The Federal Circuit disagreed, however, clarifying that AAPA alone does not constitute “prior art consisting of patents or printed publications” as required under the America Invents Act (AIA) for inter partes review (IPR) proceedings. The court held that while AAPA may inform a skilled artisan’s understanding, it cannot form the basis of an obviousness ground.

Implications:

  • Limits of IPR Strategy: Companies seeking to invalidate patents at the PTAB must ensure their arguments rely primarily on statutory prior art. Internal admissions, even when found in the patent under review, are not enough.
  • Importance of Procedural Precision: This case reinforces how procedural interpretation can outweigh substantive arguments. Understanding statutory language is critical to litigation success.
  • Drafting Risk Awareness: While not directly at issue in this case, the broader takeaway for patent applicants is to be cautious when characterizing prior art in their applications, as such language can be used in litigation, though with limits.
  • Increased Scrutiny of PTAB Practices: The ruling may prompt changes in how PTAB applies AAPA going forward, potentially raising the bar for IPR petitioners more broadly.

UK Court of Appeal Orders Apple to Pay $502 Million in FRAND Dispute
Just days later, Apple received another legal setback, this time from the UK Court of Appeal. On May 1, 2025, the court ruled that Apple must pay $502 million to Optis Cellular Technology LLC for a global license to its 4G standard-essential patents. The case, which began when Optis sued Apple in 2019, centered on the amount Apple must pay under fair, reasonable, and non-discriminatory (FRAND) licensing obligations, which arise from commitments made to global standards-setting organizations.

The decision dramatically increased the award from the UK High Court’s 2023 figure of just over $56 million, which the High Court judge had reached without relying on the valuation experts put forward by either company. The Court of Appeal found that a lump-sum license based on a $0.15 per-unit royalty, amounting to $502 million, more accurately reflected the global nature of Apple’s 4G usage and the market value of Optis’s portfolio. Apple had previously indicated that it would not accept a license on terms set by the UK courts and may appeal the decision.
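
To make the relationship between the per-unit rate and the lump sum concrete, the back-of-envelope sketch below simply relates the two figures reported above. The implied device count is our own simplification, not a number from the judgment, and it ignores interest, past-versus-future sales splits, and any adjustments the court actually applied.

```python
# Rough back-of-envelope linking the $0.15 per-unit royalty to the
# $502M lump-sum award reported above. The implied unit count is an
# illustration only; the court's actual calculation involves factors
# (interest, sales periods, portfolio adjustments) not modeled here.
lump_sum_usd = 502_000_000
per_unit_rate_usd = 0.15

implied_units = lump_sum_usd / per_unit_rate_usd
print(f"Implied licensed 4G units: {implied_units:,.0f}")  # ~3.35 billion
```

Even at fifteen cents per device, volumes at Apple’s scale quickly translate into material exposure, which is the point the Court of Appeal’s lump-sum approach drives home.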

Implications for large IP holders and the broader IP landscape:

  • FRAND Licensing as a Global Risk: The case signals a shift in how courts outside the U.S. are willing to impose significant global licensing terms, even where the jurisdictional scope is limited.
  • Litigation Forum Strategy: SEP holders may increasingly look to the UK and other jurisdictions as favorable venues for global FRAND determinations.
  • Financial Exposure in SEP Disputes: The magnitude of the award suggests that SEP enforcement remains a serious financial risk for technology companies, especially those whose products implement standardized technologies covered by SEPs.

Strategic Takeaways for Technology Companies
Taken together, these rulings offer several lessons for companies navigating the increasingly complex world of IP litigation:

  • Global IP Planning is Essential: Legal decisions in one country can have global implications. Multinationals must anticipate and coordinate litigation strategies across multiple jurisdictions.
  • Proactive Legal Audits: Regular reviews of patent drafting practices and litigation exposure are crucial. Ensuring that admissions in a patent’s own specification do not create unintended invalidity risks is now more important than ever.
  • Valuation and Licensing Readiness: As courts impose large-scale licensing obligations, companies must be prepared to defend or justify the value of their own and others’ patent portfolios, especially under FRAND regimes.

Conclusion
Apple’s recent legal setbacks illustrate the challenges even the most sophisticated companies face in managing global intellectual property. The Federal Circuit’s reversal and the UK’s expanded damages ruling in the Optis case serve as timely reminders that patent strategy must be tightly integrated with legal, technical, and business planning.

For consulting firms advising clients on IP strategy and valuation, these cases reinforce the value of forward-looking risk assessments, cross-border legal coordination, and ongoing patent portfolio management. As courts refine the rules around prior art and FRAND licensing, staying ahead of evolving jurisprudence will be key to maintaining competitive advantage and avoiding costly surprises.

Navigating the Intersection of Copyright and AI: Understanding Digital Replicas

In our last blog in this series on AI, we discussed how to identify and overcome AI hallucinations when using AI for business purposes. Today’s blog touches on another issue that has moved to the forefront as the use of artificial intelligence has grown and new capabilities have been added: digital replicas. These are AI-generated imitations of human voices, images, or appearances so realistic that they are often indistinguishable from the real thing. While digital replicas offer exciting possibilities for creativity and innovation, they also present complex legal challenges, particularly concerning copyright and individual rights.

In an ongoing effort to raise awareness of these issues and to push lawmakers to address the growing threat with legal frameworks, the U.S. Copyright Office (USCO) launched an initiative in 2023 to examine the copyright and policy issues raised by AI. Since then, the USCO has received over 10,000 comments and is publishing a multi-part report, with each part released as it is completed. On July 31, 2024, the Office published Part 1 of the report, which addresses digital replicas.

Understanding the Concept of Digital Replicas

Digital replicas refer to AI-generated content that mimics the voice, image, or appearance of a real person. These can range from AI-generated voices in music tracks to digital images used in movies or advertisements. The sophistication of AI technology has made it possible to create these replicas with minimal human intervention, raising concerns about authenticity, consent, and ownership. Anyone who has interacted with these digital replicas understands how powerful the technology is, and how much risk near-perfect copies of a person’s likeness, tone, and manner of speaking pose when used without authorization.

The Legal Landscape: Existing Protections and Gaps

The USCO’s report highlights the existing legal frameworks that offer protection against unauthorized digital replicas. These include:

  1. State Privacy and Publicity Laws: These laws offer some protection, particularly through rights of publicity and privacy. However, their effectiveness varies by state, and they often fall short of addressing the complexities introduced by AI-generated replicas.
  2. Federal Laws: The report discusses several federal laws, such as the Copyright Act, the Federal Trade Commission Act, and the Lanham Act, which provide some level of protection. Yet, these laws were not designed with AI in mind and thus may not fully cover the nuances of digital replicas.
  3. The Need for New Legislation: The report strongly advocates for the creation of new federal laws specifically designed to address the challenges posed by AI-generated digital replicas. It argues that existing laws are inadequate to protect individuals from unauthorized use of their likenesses or voices, particularly when such replicas can be easily created and distributed without consent.

The Impact on Creativity and the Arts

The proliferation of AI-generated digital replicas has sparked debates within the creative community. On the one hand, these technologies can be powerful tools for artists, enabling new forms of expression and creativity. On the other hand, they pose a threat to traditional forms of artistic labor, potentially displacing human artists and performers.

For example, in the music industry, AI-generated songs featuring the voices of well-known artists without their consent have already caused controversies. Similarly, in the film industry, the use of digital replicas of actors could lead to fewer opportunities for real actors, raising ethical and economic concerns. The counterargument is the ease with which non-artists can now create custom works: text-to-video tools allow people like me, with no artistic skills, to generate short videos and creative images from simple prompts. In a few years, I could likely use a series of prompts to create a two-hour custom movie with my son as the main character. Whether this stifles the industry or opens up new ones, much as people predicted when the internet reached everyday users, is the trillion-dollar question.

Moving Forward: Balancing Innovation and Rights

As AI continues to evolve, so too must our legal frameworks. The USCO’s report emphasizes the importance of balancing technological innovation with the protection of individual rights. It calls for new federal legislation that would:

  • Provide clear guidelines on the use of digital replicas.
  • Protect both celebrities and private individuals from unauthorized exploitation of their likenesses.
  • Ensure that individuals retain control over their digital replicas, with the ability to license or refuse the use of their likeness.

Conclusion

The intersection of copyright law and AI is a rapidly developing area, with significant implications for both creators and consumers. The USCO’s report on digital replicas is a crucial step in addressing the legal challenges posed by AI-generated content. As we navigate this new frontier, it is essential to find a balance that promotes innovation while safeguarding individual rights and creative integrity.

Unfortunately, the concerns of most in the industry will not be resolved by the publication of multi-part reports; they will ultimately be addressed by Congress or the judiciary, with the latter the more likely source of future guidance. One concern with that path is that judicial guidance is, by definition, reactive: many artists, designers, and other creative professionals will have to be harmed before courts intervene. We saw a recent example of this in the 2023 Writers Guild of America strike, which lasted nearly 150 days and targeted, among other issues, the use of AI tools such as ChatGPT and the threat of these tools replacing artists rather than merely facilitating research and script ideas. If the US intends to lead the world in creating frameworks for the legal use of AI, it is incumbent upon our elected representatives to act on the feedback received and create guidelines for the industry to follow, allowing the US to take a leading position in the regulation of AI and the use of digital replicas.

Understanding AI Series: Tackling AI Hallucinations in Business

In our ongoing “Understanding AI” series, we explore the many facets of Artificial Intelligence (AI) and its implications for businesses. Having previously covered key AI terms and limitations, we now turn to an issue many are unaware exists: AI hallucinations.

What Are AI Hallucinations?

AI hallucinations occur when AI models generate information that appears plausible but is incorrect or nonsensical. This phenomenon can arise from various sources, such as biased training data or flawed algorithms. AI hallucinations pose significant challenges, especially in critical applications where accuracy and reliability are paramount. If you have used one of the many AI tools on the market, you may have noticed such results, especially when asking the tool to provide sources, which is often where the issue is most easily spotted.

The Impact of AI Hallucinations on Business

AI hallucinations can have far-reaching consequences in business, affecting decision-making, customer relations, and operational efficiency. Here’s how:

  1. Decision-Making:

Relying on AI-generated data that is inaccurate or misleading can lead to poor strategic decisions, potentially harming the business’s long-term prospects, alienating prospective clients, and revealing an overreliance on tools that are not yet ready for mainstream business use.

  2. Customer Relations:

Inaccurate AI responses in customer service applications can frustrate customers and damage the company’s reputation.

  3. Operational Efficiency:

Incorrect data can disrupt supply chains, financial planning, and other critical business functions, leading to inefficiencies and increased costs.

Mitigating AI Hallucinations

To address the challenge of AI hallucinations, businesses should implement tangible strategies and have these strategies available for any employee utilizing AI tools. Here are key steps to consider:

  1. Data Quality:

Ensure the data used to train AI models is accurate, comprehensive, and free from biases. Regularly update and validate datasets to maintain their relevance and reliability.

  2. Algorithm Auditing:

Conduct thorough audits of AI algorithms to identify potential flaws or biases. Regularly review and update algorithms to improve their performance and reduce the risk of hallucinations.

  3. Human Oversight:

Integrate human oversight into AI-driven processes. Human experts should validate AI-generated outputs, especially in critical applications, to ensure their accuracy and reliability. For less critical tasks that do not require an expert, question the tool and seek out sources to validate the information. Even when an answer sounds accurate, digging below the surface to confirm the findings often reveals data issues or reliance on outdated or inaccurate source material. A simple illustration of this kind of check appears after this list.

  4. Explainable AI:

Utilize explainable AI techniques to understand how models arrive at their decisions. Transparency in AI decision-making processes helps build trust and allows for easier identification of errors.

  5. Continuous Monitoring:

Implement continuous monitoring systems to track AI performance in real-time. Detecting and addressing anomalies in real time can prevent the spread of incorrect information.
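
To make the human-oversight step more concrete, here is a minimal sketch, in Python, of the kind of check described above: flagging AI-generated answers whose citations cannot be matched to a trusted source list so a person reviews them before they are used. Every name in the sketch (the verified-source set, the AIAnswer structure, the review message) is a hypothetical placeholder, not a reference to any specific tool.

```python
# Minimal sketch of a human-in-the-loop check on AI-generated answers.
# All names here (VERIFIED_SOURCES, AIAnswer) are hypothetical placeholders.
from dataclasses import dataclass, field

# A trusted reference list the business maintains (knowledge-base entries,
# vetted reports, verified case citations, etc.).
VERIFIED_SOURCES = {"internal-kb-001", "vendor-report-2024", "case-db:smith-v-jones"}

@dataclass
class AIAnswer:
    text: str
    cited_sources: list = field(default_factory=list)

def needs_human_review(answer: AIAnswer) -> bool:
    """Flag answers that cite nothing, or cite sources we cannot verify."""
    if not answer.cited_sources:
        return True  # unsourced output always goes to a reviewer
    return any(src not in VERIFIED_SOURCES for src in answer.cited_sources)

# Usage: hold flagged answers for review instead of acting on them.
answer = AIAnswer(text="Revenue grew 40% last quarter.",
                  cited_sources=["vendor-report-2024", "unknown-blog"])
if needs_human_review(answer):
    print("Hold for human review before relying on this output.")
```

The same pattern extends naturally to the continuous-monitoring step: logging how often outputs are flagged over time gives an early signal that a tool’s hallucination rate is drifting.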

Real-World Examples

Several industries have successfully tackled AI hallucinations through innovative approaches:

  1. Healthcare:

In healthcare, AI is used for diagnostic purposes. To mitigate hallucinations, hospitals combine AI insights with expert reviews, ensuring that medical decisions are based on accurate information.

  2. Finance:

Financial institutions use AI for fraud detection and risk management. Continuous monitoring and regular audits of AI systems help maintain the integrity of financial data.

  3. Legal Services:
  • In a recent publication by Stanford University titled “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” researchers examined providers of legal research tools that market their products as hallucination-free. The analysis found that such claims are overstated and that the two major AI research tools used in the legal services industry hallucinate between 17% and 33% of the time.
  • The impact of failing to recognize these high hallucination rates can be significant, both to professional reputation and to the outcome of a case. Many instances have been publicized, and some attorneys have been sanctioned for citing fictional cases generated by ChatGPT-based tools.

Conclusion

AI hallucinations represent a significant challenge in the integration of AI into business operations. However, diligent efforts, as described above, can help mitigate some of the risks. By understanding and addressing AI hallucinations, businesses can better harness the full potential of AI.

Stay tuned for more insights in our “Understanding AI” series, where we continue to explore the evolving landscape of AI and its impact on various industries.

Lynn’s Picks: Foresight’s Patent of the Week – US Granted Patent 11,983,958, “Systems and Methods for Automated Makeup Application”

Disclaimer: This blog was created for informational purposes only and does not represent Foresight’s or the author’s opinion regarding the validity, quality, or enforceability of any particular patent covered in this blog. Foresight is not a law firm, and no portion of the information contained in this blog is intended to serve as legal opinion.

As a husband, I have spent a lot of time waiting for the makeup process to be completed before my wife and I can attend a function, go on a date, or generally leave the house. When I came across this patent, I was surprised that I had not seen a similar technology before, because one of the important promises of automation and robotics has always been simplifying routine tasks. Gemma robotics, the Israeli startup and holder of this patent, has a slogan on its website that states:

“We’re making Gemma because we love wearing makeup more than we love applying it”

This slogan is, or should be, the goal of consumer-focused robotics and other automation technologies: to simplify daily tasks and handle them more efficiently. It also sheds light on what we can expect to see over the coming years: small form-factor devices such as robotic vacuums and makeup-application robots, rather than full humanoid robots such as Tesla’s Optimus, which the company claims will be able to perform useful tasks in its factories before the end of 2024.

What’s Inside Patent No. 11,983,958?

This patent introduces a novel approach to makeup application in which the system records a face map, skin tone, facial features, and the preferences of the user. The user then selects from a wide variety of looks preconfigured into the system and sees a preview of the selected look prior to final selection. Once the look is confirmed, the robot calculates the right formula for the desired look, mixes it, and sprays the mixture through an airbrush nozzle onto the user’s face. To accomplish this, the system must determine the amount of makeup to mix, the sequence of application, the distance from the nozzle to the user’s face, and the force needed to apply the makeup (a sketch of this flow appears just below). The technology may be a bit early, as the company’s website shows no examples of the system in operation or the final result; still, this patent was selected to bring the conversation back to the reason robotic systems will soon be ubiquitous within households.
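
For readers who find the flow easier to follow as code, here is a purely illustrative sketch of the sequence described above. It is not the patent’s claimed implementation or any real product’s software; every class, function, and value is a hypothetical placeholder used only to show the order of steps.

```python
# Illustrative sketch of the application flow summarized above.
# Nothing here reflects the actual patent claims or Gemma's software;
# all names and numbers are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class FaceScan:
    skin_tone: str        # shade identifier derived from the face map
    regions: list         # simplified stand-in for mapped facial regions

@dataclass
class Look:
    name: str
    pigment_ratios: dict  # pigment -> fraction of the total mix

def calculate_formula(look: Look, volume_ml: float) -> dict:
    """Scale the selected look's pigment ratios to the volume to be mixed."""
    return {p: round(r * volume_ml, 2) for p, r in look.pigment_ratios.items()}

def plan_application(scan: FaceScan) -> list:
    """Stand-in for sequencing, nozzle distance, and spray force per region."""
    return [{"region": reg, "distance_cm": 10, "force": "low"} for reg in scan.regions]

# Scan the face, pick a preconfigured look, mix the formula, plan the spray pass.
scan = FaceScan(skin_tone="warm-2", regions=["forehead", "cheeks", "chin"])
look = Look(name="natural", pigment_ratios={"base": 0.7, "blush": 0.2, "highlight": 0.1})
formula = calculate_formula(look, volume_ml=4.0)
plan = plan_application(scan)
print(formula)  # {'base': 2.8, 'blush': 0.8, 'highlight': 0.4}
print(plan)
```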

What Comes Next?

Makeup is a task that a large percentage of the population performs daily, and these routine, time-consuming tasks are what inventors should focus on solving with the growing capabilities of robotic systems and artificial intelligence. It is easy to get lost in the large-format, life-like robots seen at CES or in the news, but those systems are unlikely to gain traction in the near future, whether because of cost or limitations on what they can do. For consumers, the growth of robotic and AI technologies in the home should free up time for the things we actually want to do. We already have robotic vacuum cleaners, a category that has since expanded to robotic lawn mowers. Yet the vast majority of homes still have no robotic system assisting with daily tasks, which opens a wide market for inventors to create relatively inexpensive systems that address a broad range of chores and deliver a significant return on the most valuable asset we have: time. Over the next few weeks, this blog will feature new robotic and AI systems that address the consumer market’s need for help with these everyday tasks.

Have you come across any interesting patents you would like us to feature in future blogs or did you invent a technology you would like featured? Please send us an email at media@foresightvaluation.com or call our office at (650) 561-3374.
