A question on many clients’ minds lately is whether they can incorporate assets generated by large language model (“LLM”) artificial intelligence (“AI”), or “generative AI,” into their games, videos, or other projects. This is a hot topic, and the conversation is constantly evolving. But for now, here are some of the main reasons you should be hesitant (or at least very careful) when considering the use of LLM AI-generated assets in your project.
Other sources explain better than I can what LLM AI is and how it works.[i] The gist, however, is that the AI examines sources fed to it by its developers and users, or in some cases the Internet at large. Based on that input it learns associations between words/phrases and other content, analyzes the associations among the parts of the content itself at a granular level, and attempts to guess what an expected answer to a prompt may be by reformulating the content it has learned. Although the tool is guided by rules the AI company’s developers put in place, their influence over what the AI does in specific instances is limited, as is their insight into how it actually generates output. As a result, an AI tool may produce an inaccurate or inadequate response to a prompt or, as is most relevant here, a response that directly copies large chunks of material from the sources it learned from rather than merely using those sources as stylistic “inspiration.”
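To make the “learned associations” idea concrete, here is a deliberately tiny, hypothetical sketch in Python. It is not how production LLMs actually work (those use neural networks over tokens, not word counts); it is a toy bigram model that records which word followed which in its training text and then generates output by repeatedly guessing a next word from those recorded associations. Note how, with a small training set, the output can reproduce stretches of the training text nearly verbatim, which is the same dynamic, at scale, that underlies the copying concerns discussed below.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word observed to follow it."""
    follows = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=5, seed=0):
    """Generate text by repeatedly sampling a learned follower."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:  # no learned association for this word; stop
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Because words that followed a given word more often are sampled more often, the generator tends to echo its training data; a real LLM’s far larger training set and richer statistics make its output less obviously derivative, but the underlying reformulation of learned material is analogous.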
AI tools can already generate the kinds of files that comprise game assets—text, images, 3D models, audio, video, and code—with some success, and these tools are constantly being improved upon. Although useful tools are not available for all of those categories, some are expected to be released soon, such as OpenAI’s Sora tool.[ii] Some game studios are already using AI in their development process.[iii]
But despite recent advances in AI technology, there are still several reasons that you, as a developer or video-maker, should be careful about running off to embrace that technology.
1. AI-generated assets you incorporate into your project are not currently copyrightable
LLM AI-generated output is not copyrightable in the United States (for the time being, at least). An ongoing lawsuit involves an attempt by an AI activist, Steven Thaler, to register an AI-created visual work—with the application stating that the work was a work made for hire, with the AI program as the author and Thaler as the owner. The U.S. Copyright Office rejected the application for lack of human authorship. After unsuccessfully requesting reconsideration, Thaler sued in the U.S. District Court for the District of Columbia, challenging the decision as arbitrary, capricious, or an abuse of discretion under the federal Administrative Procedure Act.[iv]
The work, titled “A Recent Entrance to Paradise”:

[Image of the work omitted.]
The District Court held that “[t]he Register did not err in denying the copyright registration application presented by plaintiff. United States copyright law protects only works of human creation.”[v]
In reaching that holding, the court distinguished the 1884 U.S. Supreme Court decision upholding as constitutional the addition of photography to the Copyright Act, because that decision had reasoned that photographs merely reproduce an image of what is in front of the camera, and the “author,” rather than the camera itself, poses the subject, selects and arranges the specific elements in the scene, and otherwise crafts the overall image. As the court stated, “[h]uman involvement in, and ultimate creative control over, the work at issue was key to the conclusion that th[at] new type of work fell within the bounds of copyright. Copyright has never stretched so far… as to protect works generated by new forms of technology operating absent any guiding human hand… Human authorship is a bedrock requirement of copyright.”[vi] The court said of Thaler’s work-for-hire argument that the doctrine concerned the question of “to whom a valid copyright should have been registered,” and was irrelevant to the question of basic copyrightability.[vii] [viii]
(Other countries have taken other approaches to this, but this post is not meant to be comprehensive and will leave most discussion of international variations for another day.) [ix]
In light of this, anything you make with generative AI and incorporate into your project would be in the public domain, and anyone would be free to take it and use it. If you registered a project that incorporated AI output with the Copyright Office, you would be required to disclaim that material in your application.[x]
2. The way some AI tools farm the internet for material could expose you to liability without your having any control over it.
The content that generative AI tools output can potentially get you into all kinds of trouble if you republish it in your project, because you don’t know where the AI tool may be getting that content from.
Again, AI tools “learn” content that is fed into them, and many of those tools have learned how to do what they do by trawling select sites (or the internet at large) for web 2.0 content posted by users, or from users feeding material directly to the AI tool. Both of these sources may include a vast amount of proprietary or problematic material. Complicating matters, the AI tools’ processes for generating output are a sort of black box that neither users nor even the tools’ developers have much insight into—the result being that AI tools may output material that violates other people’s rights, among other issues. There is already a growing number of lawsuits, against both AI companies and users who publish AI-generated works, concerning violations of third parties’ rights.[xi] What follows are a few of the ways that AI-generated output can potentially get you in trouble for publishing it in your project.
Copyright Infringement: AI-generated output may include a third party’s copyrighted material (images, text, story and characters, code, audio, 3D models, etc.) that you aren’t authorized to use. You may be liable for copyright infringement if the AI tool had access to the third party’s work and the AI output incorporated into your project is “substantially similar” to the third party’s. [xii]
Fair use is not likely to save you in that situation. It is an affirmative defense, meaning it would be your burden to prove, and it would depend on an unpredictable balancing of “the purpose and character” of your use of the third party’s material in your project, “the nature of the [] work [being infringed],” the amount of the third party’s material used, and the effect that your use has on the potential market for the third party’s work—the first and last factors being the most important and weighed against each other.[xiii] The first factor concerns whether the AI output’s use has a different or transformative purpose and character when compared to the original work, which is “a matter of degree” and must go beyond the differences that would qualify as a derivative work.[xiv] This standard could be difficult to meet. “[M]erely recontextualizing the original expression by ‘plucking the most visually arresting excerpts’ of the copyrighted work is not transformative.”[xv]
Copyright infringement can be a problem even when merely using generative AI as a starting point. For example, suppose you asked an AI image program to draw you concept art for a character in your project, and gave it some general attributes for the character: “roguish adventurer who is a white man with brown hair and stubble who uses guns.” Because the AI’s machine learning works by making associations, it is possible that it might provide you with a picture that—even if it has minor differences—is clearly recognizable as Nathan Drake from the Uncharted game series, because it has learned that those descriptors apply to him. And suppose you weren’t familiar with Uncharted—you might go and start creating assets based on that concept art for your own project, and ultimately end up infringing Naughty Dog’s copyright in the character of Nathan Drake.[xvi] [xvii]
Trademark Infringement: AI-generated output may include a third party’s trademarks that you aren’t authorized to use.[xviii] By including output with those trademarks in your project, you may be infringing the third party’s trademark rights if the presence of those marks is likely to make people think that the third party endorsed your game, causing consumer confusion or diluting the strength of their brand. Although fair use standards for trademark differ from those for copyright, fair use would still likely not provide a defense: the various fair use exceptions require that you used the trademark for a legitimate purpose and that consumer confusion not be so great as to outweigh that purpose. That would be difficult to show if your use was inadvertent in the first place and therefore purposeless.
Right of publicity / misappropriation of likeness: AI-generated output may also include someone else’s likeness (chiefly image or voice) that you aren’t authorized to use. A person’s likeness—including their image, name, voice, signature, etc.—is protected under federal and state law to varying degrees and cannot normally be used in products or advertising without consent.[xix]
Additionally, some AI tools may collect some material that is not even publicly available, such as when users input their own private data or trade secrets. Depending on the kind of information involved, if this information ended up in your AI output it could theoretically trigger various laws requiring you to restrict distribution of that information or otherwise handle it in special ways.
3. The Law Affecting AI Is Currently Being Litigated and Legislated and Could Change
The laws are still in a state of significant flux regarding AI. The U.S. Copyright Office has only recently been prompted to decide AI output’s copyrightability on applications like the one submitted by Thaler, and it is constantly issuing new guidance on the related issues and has solicited public comments on generative AI use to further develop policy.[xx] The Copyright Office has also indicated that it may consider AI-generated work copyrightable where it is the result of the author’s “own mental conception to which the author gave visible form” rather than mere “mechanical reproduction,” assessed on a case-by-case basis.[xxi] And the Thaler case is currently on appeal to the U.S. Court of Appeals for the D.C. Circuit.[xxii]
Meanwhile, the U.S. Congress and state governments are also considering legislation to regulate AI, while multiple federal executive branch agencies have been issuing their own regulatory guidance that may affect how AI material can be used in a project.[xxiii] Other countries around the world are likewise considering or have already issued new statutes or regulations. New laws may or may not ultimately include changes affecting AI copyrightability and infringement in AI output, though they are currently geared more toward regulating generative AI for purposes of public safety, security, and trust in information.[xxiv] Some proposed laws may also require you to disclose that AI was used in your project or impose limitations on how it may be used.
4. Use May Be Subject to the AI Company’s License and You Have to Comply With Its Terms
When you use an AI tool, your use is also subject to the AI developer’s terms of use, which form a private contract with that company.[xxv] And these terms may restrict what you are allowed to do with the AI output.
Although a handful of companies creating AI tools, like Microsoft, have offered indemnification to users covering liability arising from output generated by their tools, the indemnification is subject to various limitations. For example, Microsoft’s Copilot Copyright Commitment applies only to commercial customers, and for those customers Microsoft will indemnify against third-party “claims based on copyright, patent, trademark, trade secrets, or right of publicity,” but it specifically disclaims “claims based on trademark use in trade or commerce, defamation, false light, or other causes of action that are not related to IP rights,” and it also excludes coverage where the customer input infringing materials or “attempt[ed] to generate infringing materials.”[xxvi] This vague and generalized provision leaves a lot of questions about when Microsoft would or would not indemnify you against claims concerning your use of its AI’s output in your project.[xxvii]
5. Moral and Creative Concerns About Using AI-Generated Assets
Finally, there are the non-legal concerns about using AI-generated assets, namely that the practice is creatively bankrupt and ethically problematic. By its very nature, it depends on harvesting other people’s original creations to train the AI in the first place.[xxviii] And the AI output then effectively robs another creative person of a job on your project and trivializes the act of creation—part of what, arguably, makes us human in the first place.[xxix]
Takeaway
At this time, generative AI is best suited as a toy, and due to the issues outlined above I would discourage using it in creative projects you intend to publish for anything other than a reference or placeholder. But if you are dead set on using it, you should be very aware of the risks involved in incorporating its output into your project. You should actively check its output for matches with any preexisting works that can be located on the Internet. You should try to mitigate risk by using only AI tools that come with some kind of contractual indemnification against third-party claims. You should make sure that any contractors or employees you work with are not using AI without your knowledge, and include warranties and indemnification in their agreements to cover you in case they do use it. And you should consult a lawyer before beginning or releasing any project that incorporates AI-generated material.
[i] Timothy B. Lee and Sean Trott, A jargon-free explanation of how AI large language models work, ArsTechnica (July 31, 2023), https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/ .
[ii] Cade Metz, OpenAI Unveils A.I. That Instantly Generates Eye-Popping Videos, N.Y. Times (Feb. 14, 2024), https://www.nytimes.com/2024/02/15/technology/openai-sora-videos.html .
[iii] Kyle Orland, Game developer survey: 50% work at a studio already using generative AI tools, ArsTechnica (January 18, 2024), https://arstechnica.com/gaming/2024/01/game-developer-survey-50-work-at-a-studio-already-using-generative-ai-tools/ .
[iv] Thaler v. Perlmutter, No. 22-CV-1564-BAH, 2023 U.S. Dist. LEXIS 145823, at *3-5, 2023 WL 5333236, 2023 U.S.P.Q.2D (BNA) 980, __ F.Supp.3d __ (D.D.C. Aug. 18, 2023).
[v] Thaler, 2023 U.S. Dist. LEXIS 145823, at *3-5.
[vi] Thaler, 2023 U.S. Dist. LEXIS 145823, at *3-5 (distinguishing Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884)) (interpreting U.S. Const. art. 1, cl. 8, 17 U.S.C. §§ 101, 102(a)).
[vii] Id. at *8.
[viii] The rest of the decision and the briefs submitted to the court provide a lot of additional gloss on the conceptual issues involved, the history of copyright in the U.S., and the interpretation of the constitution and copyright statutes related to these issues, which is worth examining further if you are interested in policy issues regarding generative AI.
[ix] Isaiah Poritz, AI’s Thorny Copyright Questions Create International Patchwork, Bloomberg Law (Dec. 27, 2023), https://news.bloomberglaw.com/ip-law/ais-thorny-copyright-questions-create-international-patchwork (noting that China’s courts had ruled an AI image created with Stable Diffusion could receive copyright protection and that existing UK law could also be interpreted as supporting copyrightability of AI works).
[x] Failing to disclose may not invalidate the entire application, however. See Letter from Robert J. Kasunic, Associate Register of Copyrights and Director of Registration Policy and Practice, U.S. Copyright Office to Van Lindberg, Taylor English Duma LLP (Feb. 21, 2023) (on file with U.S. Copyright Office), available at https://www.copyright.gov/docs/zarya-of-the-dawn.pdf (determining to cancel original certificate and issue a replacement one where work included AI-made material not disclosed alongside human-made material). However, this may depend on whether the inaccurate information in the application was an inadvertent mistake or submitted with knowledge that it was inaccurate, the latter of which could render the registration invalid. Gold Value Int’l Textile, Inc. v. Sanctuary Clothing, LLC, 925 F.3d 1140, 1144-45 (9th Cir. 2019) (citing 17 U.S.C. § 411(b) (2008)).
[xi] See generally Lawsuits v. AI: The Trial of AI: Master List of lawsuits v. AI, ChatGPT, OpenAI, Microsoft, Meta, Midjourney & Other AI cos., Chat GPT Is Eating the World (updated Dec. 27, 2023), https://chatgptiseatingtheworld.com/2023/12/27/master-list-of-lawsuits-v-ai-chatgpt-openai-microsoft-meta-midjourney-other-ai-cos/ (accessed Feb. 18, 2024) (this list only includes suits against AI companies, not necessarily those against users who publish AI-generated works); Joe Panettieri, Generative AI Lawsuits Timeline: Legal Cases vs. OpenAI, Microsoft, Anthropic and More, Sustainable Tech Partner (Feb. 14, 2024), https://sustainabletechpartner.com/topics/ai/generative-ai-lawsuit-timeline/ (accessed Feb. 18, 2024).
[xii] See, e.g., Boisson v. Banian, Ltd., 273 F.3d 262, 267-68 (2d Cir. 2001).
[xiii] 17 U.S.C. § 107. Compare Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569, 578 (1994) with Harper & Row Publishers, Inc. v. Nation Enters., 471 U.S. 539, 566 (1985) and Andy Warhol Found. for the Visual Arts, Inc. v. Goldsmith, 598 U.S. 508, 143 S. Ct. 1258 (2023).
[xiv] Andy Warhol Found. for the Visual Arts, Inc., 143 S. Ct. at 1274-75.
[xv] See, e.g., Dr. Seuss Enters., L.P. v. ComicMix LLC, 983 F.3d 443, 453 (9th Cir. 2020) (finding that a mashup book taking Dr. Seuss’s Oh, the Places You’ll Go! and retheming it with Star Trek characters and environments was not transformative, because it conveyed the same sentiment as the original).
[xvi] See, e.g., Stuart A. Thompson, We Asked A.I. to Create the Joker. It Generated a Copyrighted Image, N.Y. Times (Jan. 25, 2024), https://www.nytimes.com/interactive/2024/01/25/business/ai-image-generators-openai-microsoft-midjourney-copyright.html (accessed Feb. 19, 2024) (noting that in AI tests, “ ‘Videogame hedgehog’ returned Sonic, Sega’s wisecracking protagonist. ‘Animated toys’ created a tableau featuring Woody, Buzz and other characters from Pixar’s ‘Toy Story.’ When [the testers] tried ‘popular movie screencap,’ out popped Iron Man, the Marvel character, in a familiar pose.”) (later noting that asking Microsoft Bing to “create an original image of an Italian video game character” resulted in images that were clearly identifiable as Nintendo’s Super Mario in December 2023, but by January guardrails Microsoft had put in place had made the results show obvious Mario images somewhat less consistently).
[xvii] One more dimension to copyright risks is that if you happen to be aware that AI has copied another work and by doing so may have removed copyright management information from it, you may also be liable under the DMCA for distributing the AI’s infringing output. 17 U.S.C. § 1202(b)(3).
[xviii] Along with copyright infringement, this is one of the claims in the Getty Images lawsuit against Stability AI, that the AI output sometimes even incorporates their branding. See Amend. Compl., Getty Images (US), Inc. v. Stability AI Ltd., No. 23-CV-00135-UNA (D. Del. Mar. 29, 2023), Dkt. No. 13.
[xix] The scope of the protection varies depending on which jurisdiction’s law is at issue. Compare Cal. Civ. Code § 3344 (protecting against unauthorized use in products and advertising explicitly) with N.Y. Civ. Rights Law § 51 (protecting against unauthorized use for advertising or trade purposes) and 15 U.S.C. § 1125(a) (protecting against misleading use suggesting affiliation or endorsement “in connection with any goods or services… in commerce”).
[xx] See generally Copyright and Artificial Intelligence, U.S. Copyright Office, https://www.copyright.gov/ai/ (accessed Feb. 18, 2024); see U.S. Copyright Office, Artificial Intelligence and Copyright, Notice of Inquiry and Request for Comment, 88 Fed. Reg. 59,942 (Aug. 30, 2023), available at http://www.govinfo.gov/content/pkg/FR-2023-08-30/pdf/2023-18624.pdf .
[xxi] U.S. Copyright Office, Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16,190 (Mar. 16, 2023), available at https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence .
[xxii] See Docket, Thaler v. Perlmutter, No. 23-5233 (D.C. Cir.), available at https://ecf.cadc.uscourts.gov/n/beam/servlet/TransportRoom?servlet=CaseSummary.jsp&caseNum=23-5233&incOrigDkt=Y&incDktEntries=Y (accessed Feb. 18, 2024).
[xxiii] See, e.g., Chase DiFeliciantonio, Who made that? California bill would require ‘watermarks’ to signal content created by AI, S.F. Chronicle (Jan. 19, 2024), https://www.sfchronicle.com/bayarea/article/california-bill-require-watermarks-signal-18617726.php (accessed Feb. 18, 2024).
[xxiv] See Cassandra Gadt-Sheckter, et al., Artificial Intelligence Review and Outlook – 2024, Gibson Dunn (Feb. 8, 2024), https://www.gibsondunn.com/artificial-intelligence-review-and-outlook-2024/ ; Nancy A. Fischer, et al., Unleashing the AI Imagination: A Global Overview of Generative AI Regulations, Pillsbury (Aug. 11, 2023), https://www.pillsburylaw.com/en/news-and-insights/ai-regulations-us-eu-uk-china.html ; Poritz, AI’s Thorny Copyright Questions Create International Patchwork, supra.
[xxv] See, e.g., Terms of use, OpenAI (updated Nov. 14, 2023), https://openai.com/policies/terms-of-use (accessed Feb. 18, 2024); Terms of Service, Midjourney (updated Dec. 22, 2023), https://docs.midjourney.com/docs/terms-of-service (accessed Feb. 18, 2024); User Agreement, Gemini (updated Feb. 16, 2024), https://www.gemini.com/legal/user-agreement#section-welcome-to-gemini (accessed Feb. 18, 2024).
[xxvi] Introducing the Microsoft Copilot Copyright Commitment, Microsoft (Sept. 7, 2023), https://www.microsoft.com/en-us/licensing/news/microsoft-copilot-copyright-commitment (accessed Feb. 18, 2024).
[xxvii] Notably, it is unclear what the distinction between “claims based on … trademark” and “claims based on trademark use in trade or commerce” is intended to mean, since any trademark infringement normally requires the infringer to have used the mark in commerce, except for claims against a manufacturer who produces goods or services with the mark to be sold by someone else. 15 U.S.C. §§ 1114(1), 1125(a) (both requiring use in commerce); see also N.Y. Gen. Bus. Law § 360-k (requiring use “in connection with the sale, distribution, offering for sale, or advertising of any goods or services”); Cal. Bus. & Prof. Code § 14245(a) (similar).
[xxviii] Dan Milmo, ‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says, The Guardian (Jan. 8, 2024), https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai (accessed Feb. 18, 2024); Sebastian Klovig Skelton, GenAI tools ‘could not exist’ if firms are made to pay copyright, ComputerWeekly.com (Jan. 26, 2024), https://www.computerweekly.com/news/366567750/GenAI-tools-could-not-exist-if-firms-are-made-to-pay-copyright (accessed Feb. 18, 2024).
[xxix] According to at least one source, within a few months following China’s early recognition of copyright in AI output, Chinese game companies had already laid off large percentages of their artists in favor of using AI. See Viola Zhou, AI is already taking video game illustrators’ jobs in China, Rest of World (Apr. 11, 2023), https://restofworld.org/2023/ai-image-china-video-game-layoffs/ (accessed Feb. 18, 2024).