
Post-hackathon development

Contributors (3)
Published
Feb 04, 2019

Goal: to have a working interactive version of the Gen Studio online by ~January 20, in time to be integrated into the February 4th release.


Web hosting: separate from Met website (more flexibility for now)

Feb 4th – large announcement event at the Met in NYC. There are funds to cover travel + hotel for nonlocal attendees. Let’s all try to be there! And think of others to invite.

We should focus on the experience of surfing latent space: traversing a world of possible artworks and synthesizing them, creating topological collages in which a wander through latent space is experienced visually as changes in rendered imagery, a visual experience of super-semantic structure. One pushback against generative works in the Met community is the idea that they are simply “mashups” of existing works, reducing them to remixes that lie between existing collections, outside the curatorial domain. With this in mind, we should use similarity relationships to explore new work inspired by existing curation (lying within existing collections) and avoid the feeling of “remixes” of the original collection (a tired and reductive version of AI art more generally).

In that sense, would it be more compelling to only select a single seed work from the existing collection, rather than mixing two together? To show people an interesting number of Met artworks (>2), it’s my impression that the mixing gets quite messy.

A new kind of “mixing”:

Perhaps only one Met seed is used to generate an artwork (no mixing of seeds); but for a walk through latent space, visitors could pick 2 (or 3, or n) artworks they like from the Met collection, and we could show the generated pieces evolve into one another.
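One way to realize such a walk is to interpolate between the latent seeds of the chosen works and render every intermediate point. A minimal NumPy sketch using spherical interpolation, a common choice for Gaussian GAN latents; the rendering step (feeding each frame to the BigGAN generator) is elided, and all names here are illustrative, not decided design:

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors.

    For Gaussian-distributed GAN latents, slerp keeps intermediate points
    near the shell where typical samples live, unlike straight lerp.
    """
    z0, z1 = np.asarray(z0, float), np.asarray(z1, float)
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1  # vectors are (nearly) parallel
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def walk(seeds, steps_per_leg=8):
    """Chain a walk through n seed latents: seed 1 -> seed 2 -> ... -> seed n."""
    frames = []
    for z0, z1 in zip(seeds, seeds[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_leg, endpoint=False):
            frames.append(slerp(z0, z1, t))
    frames.append(np.asarray(seeds[-1], float))  # land exactly on the last seed
    return frames
```

Each frame would then go through the generator to render one image of the traversal.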

In this setup, most exposure to art in the existing collection would happen on the tail end: using the generated artwork to find similar pieces at the Met.

Studio Components

  1. Interface to explore existing collection (choose n)

  2. Generation of synthesized artwork (from n? between n?)

  3. Tree/graph of artspace

  4. Integration with related AI/art projects:

    • Mark Hamilton’s ProGAN

    • Matthew Ritchie’s AR fugues (wander through latent space superimposed on museum map?)

Outstanding questions

  1. How do we want to handle Met label categories which don’t have an obvious BigGAN counterpart? There are many of these. Some options:

    • Limit to categories where there is overlap; studios have tools for certain types of creation, so we don’t need complete overlap with the Met.

    • A creative NN-chaining solution? Train a smaller GAN on Met artworks alone, use it to generate new artworks, and find the distance between them and existing Met artworks.

  2. What happens to generated artworks?

    • printing option?

    • catalog/graph? need both back and front end here

  3. We still need to test the search for Met artworks (in a different category) similar to a generated artwork. Maybe this will be a kind of “experience transfer” that works. Potentially we can do it using the inverse of the method used to generate the artworks:

    • for a generated artwork, use the same seed to generate an artwork in another category

    • find the Met artwork closest to the generated artwork in the new category

    • do this for [5] categories

  4. Front-end development & UX

    MS will need to work on this. Who should we talk to?
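For question 3 above, the “find the closest Met artwork” step could be a plain nearest-neighbor search over feature vectors. A sketch with NumPy; `features` stands in for embeddings from whatever image featurizer we end up using (an assumption, not a decided design):

```python
import numpy as np

def most_similar(query_feat, features, k=5):
    """Return indices of the k collection images most similar to the query.

    query_feat: (d,) feature vector of the generated artwork.
    features:   (n, d) matrix of feature vectors for Met images in one category.
    Similarity is cosine similarity between L2-normalized features.
    """
    q = query_feat / np.linalg.norm(query_feat)
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ q                      # cosine similarity to every collection image
    return np.argsort(-sims)[:k]     # indices, most similar first
```

Repeating this per target category gives the “do this for [5] categories” step directly.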

Team’s Bandwidth

Person         Availability in January
Chris Hoder    5-10 hrs/week
Sarah          5-10 hrs/week

email info: see team thread.

  • Final presentation (which should already include):

    • Title: Gen Art Studio @ The Met

    • Short summary:

The Gen Art Studio uses the Met digital collection to introduce people to the Met’s existing art collection and reinvigorate their connection to art by empowering them to create new artwork. In the studio, the chosen artwork runs through an AI algorithm called a generative adversarial network to create a new piece of art. The new art and its ‘ancestors’ are stored in an ‘artist ancestry graph’, showcasing the iterative development of art and allowing users to discover objects in the Met Collection that are similar to their newly generated art.

  • Link to staging site: http://getartstudioweb.azurewebsites.net/

  • How it uses AI

    • GANs to generate new images

    • (if implemented) Image similarity based on image featurization to find art that resembles your generated art

    • (new idea) leverage the caption generator from the other team

  • How concept aligns to goals of hack event (i.e. how does this help spark new global connections?)

    • Artwork is displayed without listing the artist, period, or any other metadata. This prompts users to select the pieces they enjoy most rather than focusing on traditionally famous works of art.

    • Showcases how 21st century technology (AI) can be used to create new art and discover old art

    • The interactive tool will expose users to new art that they would likely not otherwise see, in the Met or elsewhere

    • Sequential generation of an image from others creates a discussion around art ancestry and how art is very iterative and builds on past work.

  • (new) What do you hope comes from this project after the launch?

    • Additional effort would be required to develop the concept into either an interactive booth within the Met or a portal hosted on the Met website.

    • Combine this tool with reading/presentations discussing:

      • How this type of generated art exemplifies the way most artwork is not truly unique but is built iteratively on past concepts and ideas (do we have examples?)

      • The value of discovering new types of art and interests. It would be interesting to see whether users select art from areas that are not considered ‘popular’.

    • If the site were really popular, we could analyze the community graph to look for themes and ideas that emerge. What type of art do people like? What has been generated?

  • Resourcing requested to get the project in a publicly presentable format by Jan 20th (in order to meet event timing)

    • Development/Data Science: 2+?

      • Needs:

        • Clean up APIs for generating a sample of images, handling user image selections, and generating art

        • DB infrastructure for storing generated images and creating the ‘art graph’

        • Set up GAN model as API, optimize to run faster

        • Build image similarity model to discover images similar to the generated art

        • Re-train the GAN on Met data

    • Front-end:

      • 1 dev

      • Needs:

        • Interactive website allowing the user to select images, view results, and update their choices

        • Display the ‘graph of artistic ancestry’

        • Design an engaging layout

    • Demo space requirements for Feb 4th
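As a concrete starting point for the “art graph” items above (DB infrastructure, the front-end ancestry display), the ancestry relation can be modeled as parent pointers per artwork. A minimal in-memory sketch; the names and dict-backed storage are illustrative only, with the real version living behind the DB layer:

```python
from dataclasses import dataclass, field

@dataclass
class ArtNode:
    """One artwork in the ancestry graph: a Met seed or a generated piece."""
    art_id: str
    parents: list = field(default_factory=list)  # ids the work was generated from

class AncestryGraph:
    def __init__(self):
        self.nodes = {}

    def add(self, art_id, parents=()):
        self.nodes[art_id] = ArtNode(art_id, list(parents))

    def ancestors(self, art_id):
        """All transitive ancestors of a work (the 'family tree' shown to users)."""
        seen, stack = set(), list(self.nodes[art_id].parents)
        while stack:
            pid = stack.pop()
            if pid not in seen:
                seen.add(pid)
                stack.extend(self.nodes[pid].parents)
        return seen
```

Met seed works are nodes with no parents; each generated piece records the works it came from, which also gives the front end the edges to draw.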

Comments (10)
Chris Hoder: I am hoping that Ryan will be able to get us help — either the MIT interns who are starting in January or somewhere else
Maddie Cusimano: If we scrap the idea of mixing two artworks together (and with it the parent/child relationship) and focus instead on the topological wander, what does this look like now? The connection between two artworks is the path(s) travelled between them. The most immediate, but perhaps ultimately untenable, form is trail maps or road maps; in its most literal shape this would get unmanageably messy at some point.
Maddie Cusimano: +luke’s intuition was that mixing images into a static single image would often land you in a part of the space for which there are no traces (for us) of the originals. My intuition, looking at our gens at the hack, was that if you mix together >2, you always just get the same average image and it’s boring.
Maddie Cusimano: I think a very big technical question is how you will go from a Met image to the GAN seed that generates it convincingly. At the hack we made something that kind of works, but it isn’t good enough to really implement the ideas above. Someone from Microsoft should work on this. Our approach was simply sampling and taking gradient-descent steps on the seed in order to minimize the difference between the generated image and the Met image under a neural-network representation. There are other possibilities, like training a whole other neural network to do the inversion.
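To make the approach in this comment concrete, here is a toy version of inversion by gradient descent on the seed. A fixed random tanh map stands in for the GAN, and the loss is raw pixel L2 rather than the neural-network representation mentioned above, so this is a simplification for illustration, not the hack implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8)) * 0.3  # weights of the toy "generator"

def generate(z):
    """Toy generator: a fixed nonlinear map from latent z to 'image' G(z)."""
    return np.tanh(W @ z)

def invert(target, steps=3000, lr=0.05):
    """Gradient descent on the seed z to minimize ||G(z) - target||^2."""
    z = rng.standard_normal(8) * 0.1  # start from a small random seed
    for _ in range(steps):
        g = generate(z)
        # d/dz ||g - target||^2 = 2 W^T [(g - target) * (1 - g^2)]
        grad = 2 * W.T @ ((g - target) * (1 - g ** 2))
        z -= lr * grad
    return z
```

The real version would swap `generate` for BigGAN and the pixel loss for a feature-space loss, with the same loop structure.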
Chris Hoder: can you provide some more details on what is needed for going from a MET image to the GAN seed?
Maddie Cusimano: roots/routes
Maddie Cusimano: Are there any controllable or stochastic parameters of the wanderings between one image and another? I.e., will the algorithm deterministically come up with the same path if you choose the same images as someone else, or are there multiple potential paths between two images? Might be interesting to leverage the noise-width param somehow (the one that led to that ewer surrounded by festive bits).
Chris Hoder: For Feb, could we pre-compute a large set of these traversals and then have the user explore from this set? This may solve the performance issue, since generating these images can take time.
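Pre-computation could be as simple as caching the frame list per ordered pair of chosen artworks, so repeat requests never regenerate. A sketch; the names, the linear walk, and the in-process dict cache are all assumptions for illustration:

```python
import numpy as np

_cache = {}

def traversal_frames(id_a, id_b, seeds, steps=16):
    """Return (and cache) the latent frames for the walk from id_a to id_b.

    seeds: dict mapping artwork id -> latent vector. Frames for a pair are
    computed once; later requests for the same pair are cache hits.
    """
    key = (id_a, id_b)
    if key not in _cache:
        z0, z1 = np.asarray(seeds[id_a], float), np.asarray(seeds[id_b], float)
        ts = np.linspace(0.0, 1.0, steps)
        _cache[key] = [(1 - t) * z0 + t * z1 for t in ts]  # linear walk
    return _cache[key]
```

For Feb, the cached latents would be rendered to images offline and only the image sequences served to the browser.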