Navigating the Intersection of Copyright and AI: Understanding Digital Replicas

In our last blog in this series on AI, we discussed how to identify and overcome AI hallucinations when using AI for business purposes. In today’s blog, we touch on another issue that has been at the forefront of AI as the technology has grown and new features have been added to its long list of capabilities: digital replicas. These are AI-generated imitations of human voices, images, or appearances so realistic that they are often indistinguishable from the real thing. While digital replicas offer exciting possibilities for creativity and innovation, they also present complex legal challenges, particularly concerning copyright and individual rights. In an ongoing effort to raise awareness of these issues and to push lawmakers to address the growing threat with legal frameworks, the U.S. Copyright Office (USCO) launched an initiative in 2023 to examine the copyright and policy issues raised by AI. Since then, the USCO has received over 10,000 comments and is publishing a multi-part report analyzing the issues, with each part released as it is completed. On July 31, 2024, the Office published Part 1 of the report, which addresses digital replicas.

Understanding the Concept of Digital Replicas

Digital replicas refer to AI-generated content that mimics the voice, image, or appearance of a real person. These can range from AI-generated voices in music tracks to digital images used in movies or advertisements. The sophistication of AI technology has made it possible to create these replicas with minimal human intervention, raising concerns about authenticity, consent, and ownership. Anyone who has interacted with a digital replica will appreciate how powerful the technology is, and how much risk a near-perfect copy of a person’s likeness, tone, and manner of speaking carries if used in an unauthorized manner.

The Legal Landscape: Existing Protections and Gaps

The USCO’s report highlights the current legal frameworks addressing the protection against unauthorized digital replicas. These include:

  1. State Privacy and Publicity Laws: These laws offer some protection, particularly through rights of publicity and privacy. However, their effectiveness varies by state, and they often fall short of addressing the complexities introduced by AI-generated replicas.
  2. Federal Laws: The report discusses several federal laws, such as the Copyright Act, the Federal Trade Commission Act, and the Lanham Act, which provide some level of protection. Yet, these laws were not designed with AI in mind and thus may not fully cover the nuances of digital replicas.
  3. The Need for New Legislation: The report strongly advocates for the creation of new federal laws specifically designed to address the challenges posed by AI-generated digital replicas. It argues that existing laws are inadequate to protect individuals from unauthorized use of their likenesses or voices, particularly when such replicas can be easily created and distributed without consent.

The Impact on Creativity and the Arts

The proliferation of AI-generated digital replicas has sparked debates within the creative community. On the one hand, these technologies can be powerful tools for artists, enabling new forms of expression and creativity. On the other hand, they pose a threat to traditional forms of artistic labor, potentially displacing human artists and performers.

For example, in the music industry, AI-generated songs featuring the voices of well-known artists without their consent have already caused controversies. Similarly, in the film industry, the use of digital replicas for actors could lead to fewer opportunities for real actors, raising ethical and economic concerns. The counterargument is the ease with which non-artists can create custom works using tools like text-to-video, which allow people like myself, with no artistic skills, to generate short videos and creative images from simple prompts. In a few years, I could likely use a series of prompts to create a two-hour custom movie with my son as the main character. Whether this stifles the industry or opens up new ones, much as people predicted when the internet reached everyday users, is the trillion-dollar question.

Moving Forward: Balancing Innovation and Rights

As AI continues to evolve, so too must our legal frameworks. The USCO’s report emphasizes the importance of balancing technological innovation with the protection of individual rights. It calls for new federal legislation that would:

  • Provide clear guidelines on the use of digital replicas.
  • Protect both celebrities and private individuals from unauthorized exploitation of their likenesses.
  • Ensure that individuals retain control over their digital replicas, with the ability to license or refuse the use of their likeness.

Conclusion

The intersection of copyright law and AI is a rapidly developing area, with significant implications for both creators and consumers. The USCO’s report on digital replicas is a crucial step in addressing the legal challenges posed by AI-generated content. As we navigate this new frontier, it is essential to find a balance that promotes innovation while safeguarding individual rights and creative integrity. Unfortunately, the concerns of most in the industry will not be resolved through the publication of multi-part reports; they will ultimately be determined by Congress or the judiciary, with the latter being the more likely source of future guidance. One concern with that path is that it is, by definition, reactionary: many artists, designers, and others in the creative arts will have to be harmed before judicial intervention is realized. We saw a recent example of this in the 2023 Writers Guild of America strike, which lasted nearly 150 days and targeted a variety of issues, one of which was the use of AI tools such as ChatGPT and the threat that they would replace writers rather than serve as tools to facilitate research and script ideas. If the US intends to lead the world in creating frameworks for the legal use of AI, it is incumbent upon our elected representatives to act on the feedback received and create guidelines for the industry to follow, allowing the US to take a leading position in the regulation of AI and the use of digital replicas.

Understanding AI Series: Tackling AI Hallucinations in Business

In our ongoing “Understanding AI” series, we explore the many facets of Artificial Intelligence (AI) and its implications for businesses. Having previously covered key AI terms and limitations, this follow-up blog delves into an issue that most are not aware exists: AI hallucinations.

What Are AI Hallucinations?

AI hallucinations occur when AI models generate information that appears plausible but is incorrect or nonsensical. This phenomenon can arise from various sources, such as biased training data or flawed algorithms. AI hallucinations pose significant challenges, especially in critical applications where accuracy and reliability are paramount. If you have used one of the many AI tools on the market, you may have noticed these results, especially if you have asked the tool to provide sources, which is often where the issue is most easily identified.

The Impact of AI Hallucinations on Business

AI hallucinations can have far-reaching consequences in business, affecting decision-making, customer relations, and operational efficiency. Here’s how:

  1. Decision-Making:

Relying on AI-generated data that is inaccurate or misleading can lead to poor strategic decisions, potentially harming the business’s long-term prospects, alienating prospective clients, and revealing an over-reliance on tools that are not yet ready for mainstream business use.

  2. Customer Relations:

Inaccurate AI responses in customer service applications can frustrate customers and damage the company’s reputation.

  3. Operational Efficiency:

Incorrect data can disrupt supply chains, financial planning, and other critical business functions, leading to inefficiencies and increased costs.

Mitigating AI Hallucinations

To address the challenge of AI hallucinations, businesses should implement tangible strategies and make those strategies available to any employee using AI tools. Here are key steps to consider:

  1. Data Quality:

Ensure the data used to train AI models is accurate, comprehensive, and free from biases. Regularly update and validate datasets to maintain their relevance and reliability.

  2. Algorithm Auditing:

Conduct thorough audits of AI algorithms to identify potential flaws or biases. Regularly review and update algorithms to improve their performance and reduce the risk of hallucinations.

  3. Human Oversight:

Integrate human oversight into AI-driven processes. Human experts should validate AI-generated outputs, especially in critical applications, to ensure their accuracy and reliability. For less critical tasks that do not necessitate an expert, question the tool and seek out sources to validate the information. Even though an answer may sound accurate, digging below the surface to confirm the findings often reveals data issues or reliance on outdated or inaccurate source information (see the verification sketch after this list).

  4. Explainable AI:

Utilize explainable AI techniques to understand how models arrive at their decisions. Transparency in AI decision-making processes helps build trust and allows for easier identification of errors.

  5. Continuous Monitoring:

Implement continuous monitoring systems to track AI performance in real time. Detecting and addressing anomalies early can prevent the spread of incorrect information.
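
The oversight and verification steps above can be partially automated. Below is a minimal Python sketch, for illustration only, that flags AI-generated case citations that cannot be matched against a trusted reference list. Everything here is a hypothetical stand-in: the `TRUSTED_CASES` set, the deliberately simplified citation pattern, and the sample draft. A real workflow would query an authoritative citation database rather than a hard-coded set, and a human reviewer would still make the final call.

```python
import re

# Hypothetical trusted reference set (simplified case names, no reporter info).
# In practice, this lookup would query an authoritative citation database.
TRUSTED_CASES = {
    "Thaler v. Vidal",
    "Pannu v. Iolab Corp",
}

# Deliberately simplified pattern: capitalized words, "v.", capitalized words.
CITATION_PATTERN = re.compile(
    r"[A-Z][a-z]+(?: [A-Z][a-z]+)* v\. [A-Z][a-z]+(?: [A-Z][a-z]+)*"
)

def extract_citations(ai_output: str) -> list[str]:
    """Pull anything that looks like a case citation out of AI-generated text."""
    return CITATION_PATTERN.findall(ai_output)

def flag_unverified(ai_output: str) -> list[str]:
    """Return citations that could not be verified and need human review."""
    return [c for c in extract_citations(ai_output) if c not in TRUSTED_CASES]

draft = (
    "As held in Thaler v. Vidal, only a natural person may be an inventor; "
    "see also Smith v. Acme Robotics for AI co-inventorship."  # fictitious case
)
for citation in flag_unverified(draft):
    print(f"UNVERIFIED - route to human reviewer: {citation}")
```

Running this flags the fictitious “Smith v. Acme Robotics” while passing the verified Thaler citation, mirroring the human-in-the-loop gate described in step 3.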

Real-World Examples

Several industries have successfully tackled AI hallucinations through innovative approaches:

  1. Healthcare:

In healthcare, AI is used for diagnostic purposes. To mitigate hallucinations, hospitals combine AI insights with expert reviews, ensuring that medical decisions are based on accurate information.

  2. Finance:

Financial institutions use AI for fraud detection and risk management. Continuous monitoring and regular audits of AI systems help maintain the integrity of financial data.

  3. Legal Services:
  • In a recent publication by Stanford University titled “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” the researchers examined the claims of legal research tool providers who market their products as hallucination-free. Their analysis found that such claims are overstated and that the two major AI research tools used by the legal services industry hallucinate between 17% and 33% of the time.
  • The impact of not being aware of these high hallucination rates can be significant, both to professional reputation and to the outcome of a case. Many instances have been publicized, and some attorneys have been sanctioned for citing fictional cases that were provided by ChatGPT-based tools.

Conclusion

AI hallucinations represent a significant challenge in the integration of AI into business operations. However, diligent efforts – as described above – can help to mitigate some of the risks. By understanding and addressing the issue of AI hallucinations, businesses can better harness the full potential of AI.

Stay tuned for more insights in our “Understanding AI” series, where we continue to explore the evolving landscape of AI and its impact on various industries.


Understanding AI Series: Key Aspects of Artificial Intelligence Every Business Needs to Know


“… we’re smart enough to invent it and dumb enough to need it. And still so stupid we can’t figure out if we did the right thing.” (Jerry Seinfeld’s comments on AI, Duke Commencement Speech, May 12, 2024)

In this series, we dive into the world of Artificial Intelligence (AI), using various AI tools to explore how to view AI as a tool and what to look for when using it to improve results. The series is designed to rely on AI tools to provide the roadmap for exploring deeper topics and concerns arising from the increasing use of AI across a wide range of industries. To start with the basics, this first blog highlights key terms, limitations, and things to watch for when using AI tools.

AI is transforming the way businesses operate, offering tools that can enhance productivity, optimize decision-making, and drive innovation. However, understanding the key aspects of AI is essential for leveraging its full potential while being aware of its limitations. In this blog, we’ll explore important AI terms, limitations, and what business users should watch for when integrating AI into their operations. It should be noted that this series will rely on multiple AI tools to develop the content, and we will highlight the tools used. For this blog, we conducted the analysis using ChatGPT-4o and Microsoft Copilot.

Key AI Terms to Know

  1. Artificial Intelligence (AI):

AI refers to the simulation of human intelligence in machines that are programmed to think and act like humans. It encompasses various technologies, such as machine learning, natural language processing, and robotics.

  2. Machine Learning (ML):

A subset of AI, machine learning enables systems to learn from data and improve their performance over time without being explicitly programmed. ML algorithms can analyze large datasets to identify patterns and make predictions or decisions.

  3. Neural Networks:

Neural networks are computational models inspired by the human brain’s structure and function. They consist of interconnected nodes (neurons) organized in layers, widely used in deep learning algorithms to process complex data.

  4. Deep Learning:

A subset of machine learning, deep learning utilizes neural networks with many layers (deep neural networks) to extract high-level features from raw data. It has achieved remarkable success in tasks such as image recognition, natural language processing, and speech recognition.

  5. Natural Language Processing (NLP):

NLP focuses on the interaction between computers and humans through natural language. It enables machines to understand, interpret, and generate human language, facilitating tasks such as language translation, sentiment analysis, and chatbots.

  6. Supervised Learning:

Supervised learning involves training a model on labeled data, meaning the input data is paired with corresponding output labels. The model learns to make predictions or decisions by generalizing from the labeled training examples.

  7. Unsupervised Learning:

In unsupervised learning, the model is trained on unlabeled data, aiming to find hidden patterns or structures in the data without explicit guidance (the short sketch after this list contrasts supervised and unsupervised learning in code).

  8. Reinforcement Learning:

An agent learns to interact with an environment by taking actions and receiving feedback in the form of rewards or penalties. The goal is to maximize cumulative reward over time by learning optimal strategies through trial and error.

  9. Algorithm Bias:

Systematic errors or prejudices present in AI algorithms can lead to unfair or discriminatory outcomes. Bias can arise from biased training data, flawed algorithm design, or biased decision-making processes.

  10. Ethical AI:

Ethical AI involves the responsible development, deployment, and use of AI technologies in accordance with ethical principles and values. It addresses concerns related to fairness, transparency, accountability, privacy, and societal impact.

  11. Large Language Models (LLMs):

LLMs are advanced NLP models, such as OpenAI’s GPT-3, that are trained on vast amounts of text data. These models can generate human-like text, understand context, and perform a wide range of language-related tasks, from translation to summarization.

  12. Artificial General Intelligence (AGI):

AGI refers to a level of AI where machines possess the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence. AGI remains a theoretical concept and has not yet been achieved.
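
To make the supervised/unsupervised distinction above concrete, here is a brief Python sketch using scikit-learn. The toy feature vectors and labels are invented for illustration (a crude spam-detection example); real systems train on large, validated datasets.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy feature vectors: [message_length, number_of_links] for six messages.
X = [[120, 0], [45, 3], [200, 1], [30, 5], [150, 0], [25, 4]]

# Supervised learning: labels (0 = legitimate, 1 = spam) guide the training.
y = [0, 1, 0, 1, 0, 1]
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[40, 4]]))  # predicted label for a new message

# Unsupervised learning: no labels; the model finds structure on its own.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # cluster assignment for each message, discovered without labels
```

The supervised model learns from the answers it is given, while the clustering step groups the same data with no answers supplied, which is the essence of the distinction between terms 6 and 7 above.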


Limitations of AI Tools

While AI offers significant advantages, it is important to recognize its limitations:

  1. Data Dependency:

AI systems rely heavily on data for training. Poor-quality, biased, or insufficient data can lead to inaccurate or biased results.

  2. Complexity and Cost:

Developing and implementing AI solutions can be complex and costly. It requires specialized expertise and substantial computational resources.

  3. Lack of Transparency:

Some AI models, particularly deep learning models, operate as “black boxes,” making it difficult to understand how they arrive at specific decisions.

  4. Ethical Concerns:

AI systems can perpetuate existing biases and inequalities present in training data. Ensuring ethical use and fairness in AI applications is a significant challenge.

  5. Overfitting:

AI models can sometimes learn the training data too well, including noise and outliers, which leads to poor performance on new, unseen data, as the short sketch below demonstrates.
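
Overfitting is easy to demonstrate in a few lines. In the hedged sketch below, the underlying relationship is linear, but a high-degree polynomial chases the noise in a small training set; the synthetic data and the particular degrees are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + rng.normal(0, 0.2, 10)  # linear trend plus noise
x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test                             # noise-free ground truth

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit polynomial of given degree
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The higher-degree fit will typically show a lower training error but a higher test error: the signature of a model that has memorized noise rather than learned the trend.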


What to Watch For When Relying on AI

  1. Quality of Data:

Ensure that the data used to train AI models is accurate, representative, and free from biases. Regularly update and refine datasets to maintain the model’s relevance and accuracy.

  2. Ethical Considerations:

Implement ethical guidelines and frameworks to govern the use of AI in your business. Regularly audit AI systems to identify and mitigate any biases or unfair practices.

  3. Transparency and Explainability:

Strive for transparency in AI models and decisions. Where possible, use explainable AI techniques to understand how models make decisions and to build trust with stakeholders.

  4. Human Oversight:

AI should complement human decision-making, not replace it. Ensure that there is human oversight to validate and interpret AI-generated insights and decisions.

  5. Regulatory Compliance:

Stay informed about relevant regulations and standards related to AI and data privacy. Ensure that your AI practices comply with legal and ethical requirements.

  6. Continuous Monitoring and Improvement:

AI models should be continuously monitored for performance and accuracy. Regularly update models with new data and refine them to adapt to changing conditions. 

  7. Hallucinations:

Be aware of AI hallucinations, where models generate information that seems plausible but is incorrect or nonsensical. Always verify AI-generated outputs, especially in critical applications, to ensure accuracy and reliability; a simple monitoring sketch follows this list.
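
As a rough illustration of items 6 and 7 above, the following Python sketch keeps a rolling window of verified-versus-unverified AI outputs and raises a review flag when the verification rate drops below a threshold. The window size, threshold, and recorded outcomes are arbitrary assumptions; the verification itself would come from whatever human or automated check a business actually uses.

```python
from collections import deque

class OutputMonitor:
    """Track a rolling verification rate for AI outputs and flag degradation."""

    def __init__(self, window: int = 50, threshold: float = 0.9):
        self.results = deque(maxlen=window)  # recent True/False verification outcomes
        self.threshold = threshold

    def record(self, is_verified: bool) -> None:
        self.results.append(is_verified)

    def verified_rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_review(self) -> bool:
        # Only alert once the window is full, so early noise does not trigger it.
        return len(self.results) == self.results.maxlen and self.verified_rate() < self.threshold

monitor = OutputMonitor(window=5, threshold=0.8)
for outcome in [True, True, False, False, True]:  # hypothetical verification results
    monitor.record(outcome)
print(monitor.verified_rate(), monitor.needs_review())  # 0.6 True -> escalate
```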

Conclusion

Artificial Intelligence holds immense potential for businesses, but understanding its key aspects, limitations, and ethical considerations is crucial. By being informed and vigilant, businesses can effectively integrate AI into their operations, driving innovation and achieving sustainable growth. Embrace AI as a powerful tool, but always keep in mind the responsibility that comes with its use. In future blog posts, we will dive deeper into some of the topics referenced above as well as other topics that present themselves in the quickly evolving AI landscape.

Redefining Inventorship: The USPTO’s Path Forward for AI-Assisted Inventions

On February 13, 2024, the United States Patent and Trademark Office (USPTO) released examination guidance pursuant to the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The process started back in 2019, when the USPTO issued a request for public comment on patenting Artificial Intelligence (AI)-assisted inventions, followed by a 2020 report summarizing the various viewpoints found in the public comments received. In 2023, the USPTO issued a follow-up request for public comment focused on the issue of inventorship of AI- or machine-generated inventions. Following additional sessions to hear from various parties and the public in general, the USPTO determined that it would provide guidance regarding inventorship and patentability of AI-assisted inventions. Moreover, President Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in October 2023, setting out a number of policies and principles intended to allow the US to lead in AI while also promoting responsible innovation, competition, and collaboration, so that a fair, open, and competitive ecosystem and marketplace for AI technologies could be created in a manner that would drive continued innovation in the field. The Executive Order required the USPTO to publish guidance to patent examiners and applicants addressing inventorship and the use of AI, which was released on February 13, 2024.

As background on the history of inventorship and AI, the question of AI as an inventor began with the USPTO’s decision in April 2020 denying petitions to name the AI system DABUS as an inventor on two patent applications. The reasoning behind this decision was simple: current U.S. patent laws limit inventorship to a natural person. This seemingly simple decision was upheld by the U.S. District Court for the Eastern District of Virginia in September 2021. On appeal, the Federal Circuit in Thaler v. Vidal affirmed the underlying reasoning that only a natural person can be an inventor. In that decision, however, the court specifically noted that it was not deciding the related question of whether inventions made by natural persons with the assistance of AI are eligible for patent protection. Still, the basis for the decision on inventorship should apply equally to co-inventors when one of the inventors is a natural person and the other is an AI: in that situation, a co-invention made by a natural person and an AI would not be eligible for patent protection due to improper inventorship. The USPTO’s position that any patent application listing a machine as an inventor will be rejected for improper inventorship led to the question of whether any invention that is AI-assisted would be eligible for patent protection.

The guidance addresses the above question by building upon previously established case law derived from Pannu v. Iolab Corp., which focused on the level of contribution by the natural person to the claimed invention. The underlying concept, as explained by the guidance, is that while “AI systems cannot be listed inventors, the use of an AI system by a natural person does not preclude a natural person from qualifying as an inventor if the natural person significantly contributed to the claimed invention.” Interestingly, the guidance and the current structure of patent applications do not provide a mechanism to list or attribute the contribution of AI tools to the claimed invention, even if the AI systems were instrumental in its creation, because current patent law and judicial precedent focus on natural persons. Since AI is not a natural person, there is no requirement to list the tool used in the creation of the claimed invention. This may be an area where Congress needs to act to establish a requirement to attribute non-natural-person contributions to a claimed invention. The basis for this change would be to align patent disclosure policies with the stated policy goals of the Executive Order: promoting responsible innovation, competition, and collaboration so that a fair, open, and competitive ecosystem and marketplace for AI technologies can be created in a manner that drives continued innovation in the field.

As emphasized above, the central question or test under the new guidance can largely be boiled down to the “Significant Contribution” test. Historically, this test was used to determine whether an inventor must be included as a named inventor due to his/her significant contribution to the claimed invention. The guidance uses historical references to joint inventorship principles where the inventors may apply for a patent jointly, “even though (1) they did not physically work together or at the same time, (2) each did not make the same type or amount of contribution, or (3) each did not make a contribution to the subject matter of every claim of the patent.” Instead, each inventor must contribute in some significant manner to the invention, and these contributions have been tested by factors such that each inventor must: “(1) contribute in some significant manner to the conception or reduction to practice of the invention, (2) make a contribution to the claimed invention that is not insignificant in quality, when that contribution is measured against the dimension of the full invention, and (3) do more than merely explain to the real inventors well-known concepts and/or the current state of the art.” These factors were derived from the Pannu case referenced above and in the event that an inventor fails to meet any one of these factors, that inventor should not be named as a listed inventor on the patent.

What do these factors have to do with inventorship in AI-assisted inventions? In the AI-assisted invention context, the natural person must contribute significantly to the invention under these Pannu factors. If the natural person fails any one of these factors, the natural person is precluded from being a listed inventor, as is the AI system, resulting in an application that is rejected for improper inventorship. When viewing this from the perspective of a single inventor using AI, the single inventor must significantly contribute to each claim in the patent application. If the AI tool is the sole contributor to one or more claims, this would violate the Pannu factors, as there would be no natural person contributing to those particular claims, and an AI system cannot be a listed inventor. The complexity of applying this historical precedent to AI-assisted inventions led the USPTO to outline Guiding Principles to assist applicants and USPTO personnel in determining proper inventorship. These Guiding Principles are:

  1. A natural person’s use of an AI system in creating an AI-assisted invention does not negate the person’s contributions as an inventor so long as the natural person contributes significantly to the AI-assisted invention.
  2. A natural person who only presents a problem to an AI system may not be a proper inventor of an invention identified from the output of the AI system. However, a significant contribution could be shown by the way the person constructs the prompt in view of a specific problem to elicit a particular solution from the AI system.
  3. Reducing an invention to practice alone is not a significant contribution that rises to the level of inventorship. Therefore, a natural person who merely recognizes and appreciates the output of an AI system as an invention is not necessarily an inventor. However, a person who takes the output of an AI system and makes a significant contribution to the output to create an invention may be a proper inventor.
  4. A natural person who develops an essential building block from which the claimed invention is derived may be considered to have provided a significant contribution to the conception of the claimed invention even though the person was not present for or a participant in each activity that led to the conception of the claimed invention. In some situations, the natural person who designs, builds, or trains an AI system in view of a specific problem to elicit a particular solution could be an inventor, where the designing, building, or training of the AI system is a significant contribution to the invention created with the AI system.
  5. Ownership of an AI system does not, on its own, make a person an inventor of any inventions created through the use of the AI system. Therefore, a person simply owning or overseeing an AI system that is used in the creation of an invention, without providing a significant contribution to the conception of the invention, does not make that person an inventor.

It should be noted that the USPTO guidance highlights that there is “no bright-line test” for determining whether a natural person’s contribution to an AI-assisted invention is significant. This should come as no surprise to anyone who has been in the world of patents for more than a few years. While the USPTO tries to provide the best guidance possible, the ambiguities oftentimes leave more uncertainty than most practitioners would like to see, resulting in a period of uncertain outcomes in both patent prosecution and litigation. As the AI field grows and capabilities that we cannot yet imagine are developed on top of AI systems, this is surely going to be an area of active litigation and developing case law.
