The Federal Trade Commission (FTC) continues to issue guidance on the use of generative artificial intelligence (AI) and the potential regulatory scrutiny facing companies and creators using these new tools in the market. While the FTC has previously addressed issues such as exaggerated claims about the use of AI in a product and the potential deceptiveness of deepfakes and other synthetic media, its most recent guidance focuses on generative AI's use in creating content and related digital products that draw on third-party copyrighted material, whether in the training data or in the ultimate creative outputs.
As social media has rapidly become a primary channel through which consumers both interact with content and purchase products, the FTC has long scrutinized the potential harms to consumers who engage with content, products and marketing in these channels. In the context of consumer endorsements and native advertising, the FTC has brought multiple enforcement actions and issued warnings to companies, emphasizing the importance of proper transparency to consumers. More specifically, the FTC has focused on disclosure of endorsements that consumers may rely on when purchasing products or interacting with product reviews, and on ensuring that consumers are well aware when they are interacting, or about to interact, with advertising rather than editorial content. For example, in the native advertising context, the FTC has stressed the concept of “deceptive door openers,” which lead consumers to view content or make purchases before receiving all necessary disclosures.
Striking a similar theme in the context of AI, the FTC is now raising concerns about the potentially deceptive nature of AI-generated content and products in the context of consumer purchases and related activities. With respect to content, for example, the FTC cautions that marketing songs generated by AI as the work of specific recording artists, or selling books written by AI as the work of humans, is likely inherently deceptive. The FTC notes that this accords with its longstanding guidance that “[c]ompanies are always obliged to ensure that customers understand what they’re getting for their money.”
In this most recent guidance, the FTC not only emphasized
deception in the context of the traditional end consumer but also
expressed that content creators have reasonable expectations with
respect to their rights in the content they have created and how
such content can be used. In particular, the FTC noted that
unilateral changes to terms and conditions that change these
creators’ expectations could be deceptive. To that end, it is
important for a platform to adequately disclose changes to its
terms that have a material impact on creators and require
affirmative consent to those changes. The FTC asserts that it
“may take a close look if such a platform isn’t living up
to promises made to creators when they signed up to use it.”
Relatedly, companies often state that consumers can “buy” digital products like books, music, movies and games, when in fact they are only granting a limited, revocable license. Companies should always help their consumers understand what they are paying for and what they are receiving. While neither of these issues is
AI-specific, their relevance is resurfacing with the explosion of
generative AI, especially regarding the rapid-fire updating of
generative AI platforms’ terms of service.
Finally, the FTC cautions that generative AI tools trained on copyrighted or otherwise protected material could give rise to unfair or deceptive practices, noting that this is “especially true if companies offering the tools don’t come clean about the extent to which outputs may reflect the use of such material.” This is the most intriguing aspect of the FTC
guidance, in our view, because it adds yet another hurdle for those
who are building or using generative AI tools and LLMs to consider
(in addition to the copyright and related intellectual property
considerations that are still in flux in the courts). This
positioning by the FTC indicates that it is not just training on
user data without informing rightsholders that could be construed
as deceptive, but rather that training on any protected
material without disclosing that fact to consumers may be deceptive
and in violation of Section 5 of the FTC Act. The FTC observes that
this information might inform a consumer’s decision to use one
tool over another. Companies building AI tools will have to wrestle
with the tension around telling customers whether and to what
extent their training data includes copyrighted or otherwise
protected material, which could mitigate the risk of FTC enforcement on this concern but might increase the risk of litigation by rightsholders of that
training data.
If we extrapolate this further to the creative outputs of the AI
tools themselves, it is certainly possible that almost any use of
AI in any creative content or other creative output would similarly
need to be disclosed (e.g., #AIcreation) if consumers thought they were interacting with human-produced content or were otherwise deceived by their interaction with, or purchase of, those products. The FTC has
already made clear in its recently updated guidance on endorsements
that any content created by a non-human influencer must be
disclosed.
Here are some key considerations to keep in mind:
- Make the material terms and conditions clear and understandable to customers of a digital product or service, including whether they’re buying an item or just getting a license to use it. Unilaterally changing those terms or undermining reasonable ownership expectations can get you in trouble too, particularly if an end consumer or creator reasonably had different expectations and is not properly notified and given the opportunity to opt in to the new terms.
- As with physical counterfeit products, selling AI-created digital items passed off as a particular human artist’s work is not permitted. This can also create additional intellectual property and right of publicity risks.
- Be transparent, in a clear and conspicuous manner, with users of a creative AI tool regarding their usage and ownership rights in the output, as well as how the resulting works will be used on the platform, including whether they will be used for AI model training to improve the platform’s products and services.
- Consider whether it makes sense to disclose if, and to what degree, a generative AI model was trained on datasets that include copyrighted or otherwise protected material, and whether failing to do so would be considered deceptive.
- Platforms and creators will have to start considering whether outputs should be disclosed, or required to be disclosed, as AI-created.
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.