We developed a studio of tools for exploring the Met's digital collection by traversing the feature space of its images. An image-to-seed tool maps images of Met works to 'sister images' generated by BigGAN, a public generative adversarial network. Once you have generated a sister image, a gen-map tool lets you explore the latent space around it by blending what you generated with other Met works. If you find a striking combination, a similar-image search lets you browse items from the full Met catalog that share features with your find.
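As a rough sketch of the image-to-seed step, the code below inverts a target image into BigGAN's latent space by gradient descent on the noise vector. It assumes the public pytorch-pretrained-biggan package; the target tensor, the guessed ImageNet class, and the optimizer settings are illustrative choices of ours, not the studio's actual implementation.

import torch
import torch.nn.functional as F
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample)

model = BigGAN.from_pretrained('biggan-deep-256')
model.eval()

# Assumed inputs: a target image tensor of shape (1, 3, 256, 256) scaled to
# [-1, 1], and a guessed ImageNet class for the work (both illustrative).
target_image = torch.zeros(1, 3, 256, 256)
class_vector = torch.from_numpy(one_hot_from_names(['vase'], batch_size=1))

# Optimize the noise vector so the generated 'sister image' matches the target.
z = torch.from_numpy(truncated_noise_sample(truncation=0.4, batch_size=1))
z.requires_grad_(True)
optimizer = torch.optim.Adam([z], lr=0.05)
for step in range(500):
    optimizer.zero_grad()
    generated = model(z, class_vector, 0.4)
    loss = F.mse_loss(generated, target_image)
    loss.backward()
    optimizer.step()

# The gen-map's blending then reduces to mixing two recovered seeds before
# regenerating, e.g. z_mix = 0.5 * z_vase + 0.5 * z_mask (illustrative names).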
These tools are built on the Met's open image and metadata collection, which includes artist, origin, category of work, and other tags. We are also compiling a graph of connections between similar artworks, based on a notion of 'closeness' between works.
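One plausible reading of 'closeness' is cosine similarity between per-image feature vectors from any image encoder; the sketch below builds a k-nearest-neighbor edge list on that basis. The feature matrix and the choice of k are our assumptions, not the project's actual definition.

import numpy as np

def knn_graph(features: np.ndarray, k: int = 5):
    """Build a k-NN edge list from an (N, D) matrix of image features."""
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T              # pairwise cosine similarity
    np.fill_diagonal(sims, -np.inf)       # exclude self-matches
    edges = []
    for i in range(sims.shape[0]):
        for j in np.argsort(sims[i])[-k:]:
            edges.append((i, int(j), float(sims[i, j])))
    return edges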
The trained models used in the studio will be made available as a source of genesis and inspiration for new works, and as a way to choose existing works relevant to a new one. We will make this tangible to the public through tools for creating and parameterizing standalone works, or collages covering arbitrary shapes.
MIT team members: Sarah Schwettmann (schwett), Sam Klein (meta.sj), Maddie Cusimano (mcusi), Luke Hewitt (lbh), Matthew Ritchie (visiting artist), Botong Ma (btma)
Microsoft team members: Mark Hamilton (marhamil), Chris Hoder (chrhoder), Wilson Lee (tinglee), Casey Hong, Dalitso Banda, Karthik Rajendran, Manon Knoertzer
Met team members: Kim Benzel (Ancient Near Eastern Art curator), Julie Arslanoglu
Microsoft MIT Externs: Gillian Belton, Alonso Salas Infante, Diana Nguyen, Darius Bopp, Elaine Zhang
Jupyter notebook (for setting up BigGAN): on an Azure instance
Image-to-seed: on Azure
Map generation: deep-art interface on GitHub
Image search: search tool on GitHub
Image evolution: GIF + video production by hand (see the sketch below)
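The hand-made evolution GIFs could in principle be automated by walking the line between two latent seeds and rendering each step; the sketch below does this with pytorch-pretrained-biggan and PIL. The endpoints here are random seeds for illustration; in practice they would come from the image-to-seed step above.

import numpy as np
import torch
from pytorch_pretrained_biggan import (BigGAN, convert_to_images,
                                       one_hot_from_names,
                                       truncated_noise_sample)

model = BigGAN.from_pretrained('biggan-deep-256')
model.eval()
class_vector = torch.from_numpy(one_hot_from_names(['vase'], batch_size=1))

# Two endpoint seeds; in practice these would come from image-to-seed.
z_a = torch.from_numpy(truncated_noise_sample(truncation=0.4, batch_size=1))
z_b = torch.from_numpy(truncated_noise_sample(truncation=0.4, batch_size=1))

# Render frames along the straight line between the seeds, then save a GIF.
frames = []
for a in np.linspace(0.0, 1.0, num=30):
    z = (1 - float(a)) * z_a + float(a) * z_b
    with torch.no_grad():
        output = model(z, class_vector, 0.4)
    frames.append(convert_to_images(output)[0])
frames[0].save('evolution.gif', save_all=True,
               append_images=frames[1:], duration=100, loop=0)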