Goal: to have a working interactive version of the Gen Studio online by January ~20, in time to be integrated into the February 4th release.
Web hosting: separate from Met website (more flexibility for now)
Feb 4th – large announcement event at the Met in NYC. There are funds to cover travel + hotel for nonlocal attendees. Let’s all try to be there! And think of others to invite.
We should focus on the experience of surfing latent space: traversing a world of possible artworks and synthesizing them, creating topological collages in which a wander through latent space is experienced visually as changes in the rendered imagery. A visual experience of super-semantic structure. One pushback to generative works in the Met community is the idea that they are simply “mashups” of existing works, reducing them to remixes that lie between existing collections, outside the curatorial domain. With this in mind, we should use similarity relationships to explore new work inspired by existing curation (lying within existing collections) and avoid the feeling of “remixes” of the original collection (a tired and reductive version of AI art more generally).
In that sense, would it be more compelling to select only a single seed work from the existing collection, rather than mixing two together? If we want to show people an interesting number of Met artworks (>2), my impression is that the mixing gets quite messy.
A new kind of “mixing”:
Perhaps only one Met seed is used to generate an artwork (no mixing seeds), but for a walk through latent space, visitors could pick 2 (or 3, or n) artworks they like from the Met collection, and we could show the generated pieces evolve into each other, much like:
In this setup, most exposure to art in the existing collection would happen on the tail end: using the generated artwork to find similar pieces at the Met.
Interface to explore existing collection (choose n)
Generation of synthesized artwork (from n? between n?)
Tree/graph of artspace
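The walk-through-latent-space idea above can be sketched minimally: pick n seed latent vectors (one per chosen Met artwork) and interpolate between them, rendering each intermediate vector with the generator. This is a sketch under assumptions: the generator call is hypothetical, and spherical interpolation (slerp) is the usual choice for Gaussian latents rather than anything specific to our model.

```python
# Sketch: a latent-space walk through n visitor-chosen seeds.
# The generator itself is assumed; only the path construction is shown.
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between latent vectors a and b."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # vectors nearly parallel
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def latent_walk(seeds, steps_per_leg=30):
    """Chain slerp legs through each chosen seed in order."""
    frames = []
    for a, b in zip(seeds, seeds[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_leg, endpoint=False):
            frames.append(slerp(a, b, t))
    frames.append(seeds[-1])
    return frames

# Usage: n = 3 chosen artworks -> 2 legs of 30 frames plus the final seed.
# Each frame would then be rendered, e.g. img = G(z, class_vector).
rng = np.random.default_rng(0)
seeds = [rng.standard_normal(128) for _ in range(3)]
path = latent_walk(seeds, steps_per_leg=30)
print(len(path))  # 61
```

The visitor-facing effect is the generated pieces “evolving into each other” as the rendered frames are played back in sequence.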
Integration with related AI/art projects:
Mark Hamilton’s PRO GAN
Matthew Ritchie’s AR fugues (wander through latent space superimposed on museum map?)
How do we want to handle Met label categories which don’t have an obvious BigGAN counterpart? There are many of these. Some options:
Limit to categories where there is overlap. Studios have tools for certain types of creation; we don’t need complete overlap with the Met.
A creative NN-chaining solution? Train a smaller GAN on Met artworks alone, use it to generate new artworks, and measure the distance between them and existing Met artworks.
What happens to generated artworks?
Catalog/graph? We need both back end and front end here.
We still need to test the search for Met artworks (in a different category) similar to a generated artwork. Maybe this will be a kind of “experience transfer” that works? Potentially we can do it using the inverse of the method used to generate the artworks:
for a generated artwork, use the same seed to generate an artwork in another category
find the Met artwork closest to the generated artwork in the new category
do this for each category
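The three steps above can be sketched as a loop. Everything model-specific is a placeholder here: `generate()` stands in for the GAN, `features()` for an image featurizer (in practice a pretrained CNN embedding), and the Met feature sets are random; only the loop structure and the cosine nearest-neighbor lookup are the point.

```python
# Sketch of "experience transfer": reuse a generated work's seed in
# other categories, then find the closest Met work in each category.
# generate() and features() are placeholders (random), not the real model.
import numpy as np

rng = np.random.default_rng(42)

def generate(seed, category):
    """Placeholder for the GAN: image for (seed, category)."""
    return rng.standard_normal(64)

def features(image):
    """Placeholder featurizer; in practice a pretrained CNN embedding."""
    return image

def nearest_met(query, met_feats, met_ids):
    """Cosine-nearest Met work to the query feature vector."""
    q = query / np.linalg.norm(query)
    m = met_feats / np.linalg.norm(met_feats, axis=1, keepdims=True)
    return met_ids[int(np.argmax(m @ q))]

# For one generated artwork's seed, sweep the other categories:
seed = rng.standard_normal(128)
met_by_category = {
    "paintings": (rng.standard_normal((50, 64)), [f"p{i}" for i in range(50)]),
    "ceramics":  (rng.standard_normal((50, 64)), [f"c{i}" for i in range(50)]),
}
matches = {}
for cat, (feats, ids) in met_by_category.items():
    gen_feats = features(generate(seed, cat))
    matches[cat] = nearest_met(gen_feats, feats, ids)
print(matches)
```

If this works, the same nearest-neighbor lookup also covers the “(if implemented) image similarity” feature in the presentation section below.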
Front-end development & UX
MS will need to work on this. Who should we talk to?
Availability in January
email info: see team thread.
Final presentation (which should include the following)
Title: Gen Art Studio @ The Met
The Gen Art Studio uses the Met digital collection to introduce people to the Met’s existing art collection and to reinvigorate their connection to art by empowering them to create new artwork. In the studio, a chosen artwork runs through an AI algorithm called a generative adversarial network to create a new piece of art. The new art and its ‘ancestors’ are stored in an ‘art ancestry graph,’ which both showcases the iterative development of art and lets users discover new objects in the Met collection that are similar to their newly generated art.
Link to staging site: http://getartstudioweb.azurewebsites.net/
How it uses AI
GANs to generate new images
(if implemented) Image similarity based on image featurization to find art that resembles your generated art
(new idea) leverage the caption generator from the other team
How concept aligns to goals of hack event (i.e. how does this help spark new global connections?)
Artwork is displayed without listing the artist, period, or any other metadata. This will prompt users to select the pieces they enjoy most rather than focusing on traditionally famous works of art.
Showcases how 21st century technology (AI) can be used to create new art and discover old art
The interactive tool will expose users to art they would likely not otherwise see, at the Met or elsewhere.
Generating an image sequentially from others creates a discussion around art ancestry and how art builds iteratively on past work.
(new) What do you hope comes from this project after the launch?
It would require additional effort to develop the concept into either an interactive booth within the Met or a portal hosted on the Met website.
Combine this tool with reading/presentations discussing:
How this type of generated art exemplifies the way most artwork is not truly unique but is built iteratively on past concepts and ideas (do we have examples?)
The value of discovering new types of art & interests. It would be interesting to see whether users select art from areas that are not considered ‘popular’.
If the site became really popular, we could analyze the community graph to look for themes and ideas that emerge. What type of art do people like? What has been generated?
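A first-pass version of that community-graph analysis is just counting which Met works are chosen as parents most often in the ancestry graph. The edge list below is made up for illustration.

```python
# Sketch: rank Met works by how often they parent a generated piece.
# Edges are (child_generated_id, parent_met_id); values are invented.
from collections import Counter

edges = [(101, "met-7"), (102, "met-7"), (103, "met-2"),
         (104, "met-7"), (105, "met-2"), (106, "met-9")]

popularity = Counter(parent for _, parent in edges)
print(popularity.most_common(2))  # [('met-7', 3), ('met-2', 2)]
```

The same counting could run over generated-to-generated edges to see which user creations spawn the most descendants.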
Resourcing requested to get the project in a publicly presentable format by Jan 20th (in order to meet event timing)
Development/Data Science: 2+?
Clean up APIs for generating samples of images, handling user image selections, & generating art
DB infrastructure for storing generated images and creating the ‘art graph’
Set up GAN model as API, optimize to run faster
Build image similarity model to discover images similar to the generated art
Re-train GAN on Met data
Interactive website allowing the user to select images, view results, and update their choices
Display the ‘graph of artistic ancestry’
Design an engaging layout
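One minimal shape for the DB infrastructure behind the ‘art graph’ item above: a works table plus an edge table, which gives the ancestry graph directly. This is a sketch in SQLite with invented table/column names and placeholder IDs, not a committed schema.

```python
# Sketch: 'art graph' storage as two tables. Each generated work keeps
# its latent seed and links to its parents (Met or generated works).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE artwork (
    id INTEGER PRIMARY KEY,
    kind TEXT CHECK (kind IN ('met', 'generated')),
    met_object_id TEXT,   -- NULL for generated works
    seed BLOB             -- latent seed; NULL for Met works
);
CREATE TABLE ancestry (
    child_id  INTEGER REFERENCES artwork(id),
    parent_id INTEGER REFERENCES artwork(id),
    PRIMARY KEY (child_id, parent_id)
);
""")

# A generated piece descending from two Met works (IDs made up):
conn.execute("INSERT INTO artwork VALUES (1, 'met', 'met-001', NULL)")
conn.execute("INSERT INTO artwork VALUES (2, 'met', 'met-002', NULL)")
conn.execute("INSERT INTO artwork VALUES (3, 'generated', NULL, ?)",
             (b"seed-bytes",))
conn.executemany("INSERT INTO ancestry VALUES (3, ?)", [(1,), (2,)])

parents = [p for (p,) in conn.execute(
    "SELECT parent_id FROM ancestry WHERE child_id = 3 ORDER BY parent_id")]
print(parents)  # [1, 2]
```

Walking `ancestry` recursively yields the full ‘graph of artistic ancestry’ for display, and storing the seed with each generated work makes the cross-category regeneration step reproducible.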
Demo space requirements for Feb 4th