Goal: to have a working interactive version of the Gen Studio online by January ~20, in time to be integrated into the February 4th release.
Web hosting: separate from Met website (more flexibility for now)
Feb 4th – large announcement event at the Met in NYC. There are funds to cover travel + hotel for nonlocal attendees. Let’s all try to be there! And think of others to invite.
We should focus on the experience of surfing latent space: traversing a world of possible artworks and synthesizing them, creating topological collages where a wander through latent space is experienced visually as changes in rendered imagery. A visual experience of super-semantic structure. One pushback against generative works in the Met community is the idea that they are simply "mashups" of existing works, reducing them to remixes that lie between existing collections, outside the curatorial domain. With this in mind, we should use similarity relationships to explore new work inspired by existing curation (lying within existing collections) and avoid the feeling of "remixes" of the original collection (a tired and reductive version of AI art more generally).
In that sense, would it be more compelling to select only a single seed work from the existing collection, rather than mixing two together? To show people an interesting number of Met artworks (>2), my impression is that the mixing gets quite messy.
A new kind of "mixing":
Perhaps only one Met seed is used to generate an artwork (no mixing of seeds), but for a walk through latent space, visitors could pick 2 (or 3, or n) artworks they like from the Met collection, and we could show the generated pieces evolve into each other.
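The walk between visitor-chosen seeds could be sketched as an interpolation through latent space. A minimal NumPy sketch is below; the generator itself is omitted, and the choice of spherical interpolation (rather than linear mixing) is an assumption we'd want to test, not a settled decision:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors. Often preferred
    over linear mixing for Gaussian latents, which concentrate near a
    sphere of radius sqrt(dim)."""
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if omega < 1e-8:                    # nearly parallel: fall back to lerp
        return (1 - t) * z0 + t * z1
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def latent_walk(seeds, steps_per_leg=30):
    """Given n seed latents (one per visitor-chosen artwork), return a
    path of latents that visits them in order; rendering each latent
    through the GAN yields the evolving imagery."""
    path = []
    for a, b in zip(seeds[:-1], seeds[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_leg, endpoint=False):
            path.append(slerp(a, b, t))
    path.append(seeds[-1])
    return path
```

With 3 chosen artworks and 30 steps per leg, this produces a 61-frame path whose endpoints are exactly the chosen seeds.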
In this setup, most exposure to art in the existing collection would happen on the tail end: using the generated artwork to find similar pieces at the Met.
Interface to explore existing collection (choose n)
Generation of synthesized artwork (from n? between n?)
Tree/graph of artspace
Integration with related AI/art projects:
Mark Hamilton’s PRO GAN
Matthew Ritchie’s AR fugues (wander through latent space superimposed on museum map?)
How do we want to handle Met label categories which don’t have an obvious BigGAN counterpart? There are many of these. Some options:
Limit to categories where there is overlap. Studios have tools for certain types of creation; we don't need complete overlap with the Met.
Creative NN-chaining solution? Train a smaller GAN on Met artworks alone, use it to generate new artworks, and find the distance between them and existing Met artworks.
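The "find the distance" step could be a nearest-neighbor lookup in a shared feature space. A minimal sketch, assuming we have feature vectors for all Met works from some pretrained image encoder (which encoder is an open choice):

```python
import numpy as np

def nearest_met_works(gen_feat, met_feats, k=5):
    """Rank Met artworks by cosine similarity to the feature vector of a
    generated image. `met_feats` is an (num_works, feat_dim) matrix of
    precomputed embeddings; `gen_feat` is the generated image's embedding."""
    met = met_feats / np.linalg.norm(met_feats, axis=1, keepdims=True)
    g = gen_feat / np.linalg.norm(gen_feat)
    sims = met @ g                      # cosine similarity per Met work
    top = np.argsort(-sims)[:k]         # indices of the k closest works
    return top, sims[top]
```

For the full collection we'd want an approximate-nearest-neighbor index rather than a dense matrix product, but the ranking logic is the same.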
What happens to generated artworks?
Catalog/graph? We need both back end and front end here.
We still need to test the search for Met artworks (in a different category) similar to a generated artwork. Maybe this will be a kind of "experience transfer" that works. Potentially we can do it using the inverse of the method used to generate the artworks:
for a generated artwork, use the same seed to generate an artwork in another category
find the Met artwork closest to the generated artwork in the new category
do this for n categories
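The three steps above can be sketched as a small loop. `generate`, `embed`, and `nearest` are placeholders for components we haven't built yet (the class-conditional generator call, the feature encoder, and the per-category similarity lookup):

```python
def transfer_across_categories(seed, categories, generate, embed, nearest):
    """For one generated artwork's seed: render it in each other category
    with a class-conditional generator, embed the result, and look up the
    closest Met work in that category."""
    return {cat: nearest(cat, embed(generate(seed, cat)))
            for cat in categories}
```

This keeps the experiment logic separate from the (still unsettled) choice of generator and encoder, so we can swap them as the hack-week components improve.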
Front-end development & UX
MS will need to work on this. Who should we talk to?
Availability in January
email info: see team thread.
Final presentation (which should already include)
Title: Gen Art Studio @ The Met
The Gen Art Studio uses the Met digital collection to introduce people to the Met's existing art collection and reinvigorate their connection to art by empowering them to create new artwork. In the studio, a chosen artwork runs through an AI algorithm called a generative adversarial network to create a new piece of art. The new art and its 'ancestors' are stored in an 'artist ancestry graph', showcasing the iterative development of art and allowing users to discover new objects in the Met collection that are similar to their newly generated art.
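As a back-end sketch, the ancestry graph only needs parent pointers per generated artwork plus a transitive-ancestor query. A minimal version, assuming string IDs for both Met objects and generated pieces (the storage layer is left open):

```python
class AncestryGraph:
    """Minimal sketch of the 'artist ancestry graph': each generated
    artwork records the works (Met originals or earlier generations)
    it derives from."""

    def __init__(self):
        self.parents = {}                # artwork id -> list of parent ids

    def add(self, artwork_id, parent_ids):
        self.parents[artwork_id] = list(parent_ids)

    def ancestors(self, artwork_id):
        """All transitive ancestors, e.g. to trace a piece back to its
        original Met seed(s)."""
        seen, stack = set(), list(self.parents.get(artwork_id, []))
        while stack:
            a = stack.pop()
            if a not in seen:
                seen.add(a)
                stack.extend(self.parents.get(a, []))
        return seen
```

Met originals simply have no entry in `parents`, so they terminate every ancestry walk.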
I am hoping that Ryan will be able to get us help, either from the MIT interns starting in January or from elsewhere.
If we scrap the idea of mixing two artworks together (and with it the parent/child relationship) and focus instead on the topological wander, what does this look like now? The connection between two artworks becomes the path(s) travelled between them.
The most immediate, though perhaps ultimately untenable, form is trail maps or road maps; in its most literal shape this would get unmanageably messy at some point.
+ Luke's intuition was that mixing images into a single static image would often land you in a part of the space with no traces (for us) of the originals.
My intuition from looking at our gens at the hack was that if you mix together >2 artworks, you always just get the same averaged image, and it's boring.
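There's a simple geometric reading of both intuitions, which this quick NumPy check illustrates: in high dimensions, Gaussian latents concentrate near radius sqrt(dim), but the mean of n of them shrinks toward radius sqrt(dim / n), a region the generator rarely sampled during training (this framing is our hypothesis, not something we've verified on the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n = 128, 5                         # latent dim and number of mixed works
zs = rng.standard_normal((n, dim))      # n independent latent seeds

single_norm = np.linalg.norm(zs[0])          # ~ sqrt(128) ~ 11.3
mean_norm = np.linalg.norm(zs.mean(axis=0))  # ~ sqrt(128 / 5) ~ 5.1
```

So averaging many seeds pushes the mix toward the (atypical) center of latent space, which would explain the samey, washed-out results we saw.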
I think a very big technical question is how we will go from a Met image to the GAN seed that generates it convincingly. At the hack we made something that kind of works, but it isn't good enough to really implement the ideas above. Someone from Microsoft should work on this.
Our approach was simply sampling and taking gradient-descent steps on the seed in order to minimize the difference between the generated image and the Met image under a neural-network representation. There are other possibilities, like training a whole separate neural network to do the inversion.
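The gradient-descent-on-the-seed idea looks like the following. This is a toy sketch: the real generator is the GAN and the loss is taken in a neural feature space, but here G is a linear map with a hand-written gradient so the sketch runs standalone:

```python
import numpy as np

def invert_by_descent(grad, x, z0, lr=0.1, steps=1000):
    """Gradient descent on the seed z to minimize ||G(z) - x||^2.
    `grad` computes the gradient of that loss with respect to z
    (autodiff would supply this for a real GAN)."""
    z = z0.copy()
    for _ in range(steps):
        z -= lr * grad(z, x)
    return z

# Toy linear "generator" standing in for the GAN.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 16)) / 8.0
G = lambda z: A @ z
grad = lambda z, x: 2.0 * A.T @ (G(z) - x)   # gradient of ||Az - x||^2

z_true = rng.standard_normal(16)             # the "unknown" seed
x = G(z_true)                                # the target "image"
z_hat = invert_by_descent(grad, x, np.zeros(16))
```

For the real system the loss landscape is non-convex, which is presumably why our hack-week version only "kind of works"; restarts from multiple sampled seeds (the "sampling" part of our approach) help with that.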
Can you provide some more details on what is needed for going from a Met image to the GAN seed?