Illustration by Julia Zimmerman
By Juniper Lovato, Julia Zimmerman, and Jennifer Karson
Generative AI is rapidly transforming the art world, creating significant tensions not only between artists and model creators but also among artists themselves. These tools can produce near-instantaneous, art-like outputs at unprecedented scale, changing the means of production for many artists and for consumers of creative work alike. While some celebrate Generative AI for its potential to enhance their processes and democratize creativity, making artistic expression accessible to more people, others criticize its ethical implications. This tension highlights the interplay between technological innovation and the rights of creators in a complex socio-technical system.
A major concern in this context is the exploitation of artists, whose original works are often used as training data without proper credit or compensation, and who now have to compete with Generative AI art models that are hyper-efficient, inexpensive digital twins of their past selves. Artists are well known for their willingness to embrace and challenge the limits of innovation. They are quick to integrate emerging technologies that can enhance their practice, but they are also justifiably protective of their artistry and livelihoods. Generative AI tools exemplify this tension: they provide new avenues to experiment with forms and techniques, but they also raise serious ethical and economic red flags, many centering on ownership, fair use, and transparency. These worries are compounded by the opaque practices of many Generative AI model creators, who often use publicly available art without direct consent from artists. Many artists worry that Generative AI artwork could flood the market and displace human artists, especially in entry-level positions and in industries like gaming and film. These anxieties reflect broader concerns about the economic, and very real emotional, impacts of this technology. Importantly, if these concerns are not addressed, the AI community risks alienating a critical group of constituents, jeopardizing collaboration and trust in the evolving creative landscape.
In our recent study, Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art [1], we explored these tensions by gathering opinions on Generative AI directly from artists. Through a survey of 459 artists, we observed nuanced perspectives on how Generative AI both empowers and challenges their work. Although we designed our survey to reflect the demographics of working artists in the US, we recognize that artists – like any group of people – are not a monolith, either within the US or globally. We see our survey as a much-needed starting point for incorporating the voices of artists into a discourse that has so far centered on the extended tech sphere of influence.
We also recognize that artists in different disciplines feel very differently about these topics, which makes sense given that norms and attitudes toward technology vary across fields – a pattern our survey reflects as well. Artists working in more physically oriented forms like sculpture or craft were much less likely to see Generative AI as a positive development, while photographers and designers were more favorable. Familiarity with AI tools also makes a big difference: artists who have engaged with Generative AI in the past are less likely to see it as a threat to their livelihood.
Our survey results showed a divide among artists over the utility and threat of Generative AI. While 44% viewed it as a positive development for the art field, 61% considered it a threat to the art workforce. Notably, 22% acknowledged both its potential benefits and its risks, underscoring the complex balance of opportunity and concern that many artists currently feel. Compensation is another critical issue. While half of the respondents did not demand financial compensation for the use of their work as training data, they strongly opposed for-profit companies benefiting from their art. This highlights a concern over the commercialization of creative labor without equitable benefit-sharing.
One of the most consistent findings from our survey is that artists overwhelmingly want more transparency. Over 80% of our survey respondents believe that creators of AI models should be required to disclose in detail the artworks used to train their models. Without such disclosures, artists fear that their works could be repurposed without acknowledgment, consent, or compensation. One concern is that public mistrust in sharing works (due to fears of being used as training data) may spread within the artistic community and decrease the overall sharing of creative ideas and knowledge.
We speculate that these results indicate a fundamental difference in motivation between the organizations behind Generative AI models and artists. Artists often feel a shared responsibility to enrich human culture for the benefit of everyone. Participants in an ecosystem based on collectivity, exchange, and generosity can suffer a significant breach of trust when other individuals or companies are perceived as operating in a primarily extractive manner – especially when AI model creators convert a cultural exchange into a largely one-way monetary funnel, a strategy recognizable in other Big Tech behaviors such as knowledge predation [2]. Claims about the democratization of art can feel disingenuous in this context, when model creators seem poised to benefit from, without meaningfully contributing to, the commons [3]. The pain point is not in using or taking someone else's work but in violating the values that motivated sharing that work in the first place. Many artists, in fact, want their art to be used and shared, but they want that done in good faith, with the shared responsibility we all have to our collective works (culture, cumulative knowledge): no one likes to feel taken advantage of. The perceived harm is not solely financial, and remediation should therefore not be limited to financial compensation: the motivations of the system that creates and uses these models, which are currently driven largely by money and politics, need to be reconciled with the motivations that undergird our cultural commons.
Our research stresses the need for change within the tech industry, including more collaboration with its constituents, such as artists, when shaping policies around Generative AI. Measures to this end include obtaining consent before using artworks in training data, establishing fair recognition for artists' contributions to the models, and ensuring transparency in AI model development. Encouragingly, there is growing awareness of the need for ethical and equitable approaches, resulting in positive recent developments [4], such as Harvard's release of a public domain dataset for AI training. At the same time, the turn toward smaller, more efficient models championed by some industry leaders suggests an alternative path forward, one that could align technological innovation with fairer intellectual property practices and benefit both creators and the broader cultural ecosystem. Additionally, it appears that some popular image generators will no longer generate works in the style of established artists.
The art world has a rich history of adapting to technological advancements. Generative AI represents a new chapter in this long evolution, but its success will depend on model creators' ability to respect and support the human creators who are the foundation of cultural production. Without artists as the source, these models would go hungry.
[1] Lovato, J., Zimmerman, J. W., Smith, I., Dodds, P., & Karson, J. L. (2024, October). Foregrounding artist opinions: A survey study on transparency, ownership, and fairness in AI generative art. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (Vol. 7, pp. 905-916).
[2] Rikap, C., & Lundvall, B. Å. (2022). Big tech, knowledge predation and the implications for development. Innovation and Development, 12(3), 389-416.
[3] Liesenfeld, A., & Dingemanse, M. (2024, June). Rethinking open source generative AI: open washing and the EU AI Act. In The 2024 ACM Conference on Fairness, Accountability, and Transparency (pp. 1774-1787).
[4] Knibbs, K. (2024, December 12). Harvard is releasing a massive free AI training dataset funded by OpenAI and Microsoft. Wired.