The USC Sustainability Hub hosted an open house event on Wednesday. The open house brought together USC departments and organizations dedicated to creating sustainability solutions on and around campus. Notably, it featured an artificial intelligence art exhibition titled “City Ascendant: Imaging the Future of L.A. through A.I.” Other participating organizations included the USG Sustainability Committee, USC Zero Waste Initiative and USC Environmental Student Assembly.
The exhibition consisted of six AI-generated pieces created by a team led by USC Cinematic Arts Professor Kathy Smith. The exhibition was intended to “imagine a more sustainable Los Angeles through the lens of AI.”
Some of the artworks displayed featured scuba divers, futuristic aquafarming and a foliage-filled cargo ship. Sijia Zheng and Zeping Sun, two students completing an MFA in Expanded Animation Research, worked alongside Smith to create the artwork displayed at the USC Sustainability Hub.
The team is part of the Expanded Animation program, a collaboration between the School of Cinematic Arts, the Department of Physics and Astronomy, the Wrigley Institute for Environment and Sustainability and the Kaufman School of Dance. The Wrigley Institute “emphasises the intersection of arts and sustainability,” the institute’s Education and Engagement Administrator Hannah Maryanski Kiszla said during an address. “The exhibit offers an alternative beyond the usual doom and gloom narratives” often present in the media.
“NVIDIA provided us with an academic hardware GPU (Graphics Processing Unit) grant in 2022 for my research,” Smith said. The students used this hardware, along with various AI models, to develop “new forms of narrative expression and creative processes,” according to Smith. She encouraged students to “visualize their own image making first and build on their original imagery with intentionality when using AI.”
When asked about the process of creating the artwork, Sun said she asked ChatGPT, “What do you think Los Angeles looks like in the future?” and was given several images. “I then put those [generated images] to generate a new picture.”
“But sometimes, I directly give real images to imagine something really interesting,” said Sun.
For example, “sometimes AI cannot generate the correct coral reefs,” explained Zheng. “That’s why we use the picture reference to let the AI understand what the Los Angeles ocean will look like.”
At first, “AI gave me a whole army [of images],” said Sun. “It’s all interesting. AI can imagine many different things.”
After the AI generated an image the artists liked, they moved to Photoshop to add details and fix inconsistencies in the artwork before it was printed and displayed.
When asked how many prompts it took to produce an image ready for Photoshop, Zheng explained that “the average number is like five to six times to generate, and every generation has four different pictures.”
Smith created some of the works in the exhibition as well, and said she “used several of [her] personal photos in combination with the AI apps to generate the new images of downtown Los Angeles.”
Sun believes AI-generated art will continue to be a popular medium in the future, and said she “already brings AI to some commercial and personal creative projects.” She added that she can use AI in a project to “create something that’s really effective,” removing the need for an artistic model. “We can use AI to generate everything,” concluded Sun.
This story was updated after publication to correct the name of the USC Wrigley Institute for Environment and Sustainability.
