In a ruling this past August, a federal court declared that the emerging world of AI-generated art, despite its striking resemblance to human creations, stands outside the protective umbrella of copyright law. But this legal verdict only skims the surface of the profound questions that AI-generated art and entertainment bring to the forefront.
Picture this: with a mere tap of a finger, users can orchestrate a symphony of sounds spanning genres, all composed within seconds by algorithms. Users can specify parameters like genre, instrumentation, tempo, and whether to include vocals, expanding the world of music creation and the endless legal implications that accompany it. If these creations are not copyrightable, are they still subject to other protections, like the First Amendment? Specifically, in a world where AI art is devoid of human ownership, does it possess the same power as speech?
There are particularly strong implications for rap and hip-hop. These genres, rich with cultural significance, have long been the canvas for Black artists who have boldly and unapologetically explored linguistic boundaries, raising questions about artistic freedom and expression. Specifically, these genres have normalized the use of the n-word by Black musicians as an act of reclamation woven into their art.
I will concede that this cultural reclamation and its connection to the rap and hip-hop industries have more complexities than I can begin to delineate in just one post. Even if I did fully grasp them, the reality is that they would be completely lost on any AI algorithm generating music within those genres.
AI simply cannot understand the complexities of language, especially racially and historically charged language that may be classified as hate speech. In a study from the University of Oxford, researchers tried to train an AI program to properly identify hateful statements by feeding it thousands of examples. Even then, the program could not reliably distinguish hateful uses of certain words from non-hateful ones.
If a program cannot identify when others use sensitive language respectfully, it follows that it could not use that language itself in a consistently culturally sensitive manner.
Even if it could, would we be comfortable with it participating?
Say Jay-Z’s “The Story of OJ” or Kanye’s “Violent Crimes” were written by an algorithm. If the law couldn’t determine who owned those songs for copyright purposes, how could it determine who is speaking through them? Would it be the performer, the person who wrote the prompt, or are we actually prepared to say that the algorithm itself is “speaking”?
The fact is that art and speech already have a nuanced relationship, as do hate speech and the law. As AI-generated music evolves and its potential to enter the market increases, it only adds to the complications. Especially in the music industry, where culture and law intersect, non-human creators leave room for people’s emotions and politics to come into play. AI blurs the lines of whose art the generated product truly is, and thus whom any lyrics it generates truly belong to. This not only presents copyright issues that the law is struggling to keep up with; it also raises questions about the First Amendment and hate speech politics that we may have to answer sooner than we’re prepared to.
 Thaler v. Perlmutter, No. CV 22-1564 (BAH), 2023 WL 5333236 (D.D.C. Aug. 18, 2023).
 See, e.g., AI Music Generator, Soundraw, https://soundraw.io [https://perma.cc/QD58-S2HU] [http://web.archive.org/web/20231010192134/https://soundraw.io/] (last visited Oct. 10, 2023); Boomy, https://boomy.com [http://web.archive.org/web/20231010192415/https://boomy.com/] (last visited Oct. 10, 2023); AI Music Generator, Loudly, https://www.loudly.com/ai-music-generator [https://perma.cc/G24F-R8DM] [http://web.archive.org/web/20231010192629/https://www.loudly.com/ai-music-generator] (last visited Oct. 10, 2023).
 Natalie Alkiviadou, Artificial Intelligence and Online Hate Speech Moderation, 19 Sur Int’l J. Hum. Rts. 32 (2022).
 Paul Röttger et al., HateCheck: Functional Tests for Hate Speech Detection Models, in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (2021).
 Kanye West, Violent Crimes, on Ye (Sony Music Ent. 2018); Jay-Z, The Story of OJ, on 4:44 (S. Carter Enters. 2017).