Gen Studio Deployment Plan

Published on Jan 04, 2019

Our goal is for users to experience exploring the latent space around a painting.

These are variations of an existing artwork, that were never created but can be visualized by using neural networks. Users will be able to ‘walk’ around this space and explore generating new artwork by changing along a predefined dimension.
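
To make the ‘walk’ concrete, here is a minimal sketch of the underlying mechanic: a GAN generator maps a latent vector (a ‘seed’) to an image, so stepping along a line between two seeds produces a smooth series of in-between artworks. The generator call is commented out because it assumes a trained model (e.g. BigGAN); only the vector arithmetic is illustrated, and the seed dimension is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
z_start = rng.standard_normal(128)  # seed for the original painting (illustrative)
z_end = rng.standard_normal(128)    # seed after moving along a chosen dimension

# Walk the latent space: linearly interpolate between the two seeds.
for t in np.linspace(0.0, 1.0, num=8):
    z = (1 - t) * z_start + t * z_end
    # image = generator(z)  # a trained GAN generator would render each step
    print(f"t={t:.2f}, first 3 seed coordinates: {np.round(z[:3], 3)}")
```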

When users have found a desired version of the art, they can then use this new image to search the Met collection for other visually similar images, empowering a new and different type of art generation and experience.

In this project we will showcase how a class of algorithms called Generative Adversarial Networks (GANs) can be used to create pseudo-realistic images and discover new artwork within the Met collection. We aim to showcase this work with an interactive website (or app) for the 2/4 MET launch event.

We aim to showcase the following functionality:

  • Generate new art images based on an initial piece of Met artwork

  • Allow users to wander around the latent space of this new art, generating new artwork in the process. This showcases how one piece of art can ‘evolve’ into a different one (Step 1 below)

  • Link the generated artwork to related existing images in the collection, helping users discover existing artwork that is new to them (Step 3 below)

  • Allow users to search through the Met’s art collection by linking related images together in a graph (Step 3)

  • Take a discovered artwork and feed it back into the exploration experience (Step 3 → Step 1)

How does this concept align with the goals of the hack event? (How does it help spark new global connections?)

What do you hope comes from this project after the launch?

  • Work is developed into an interactive exhibit for users at the Met or on their website

  • Expand the functionality to be truly interactive and not a curated experience

  • Add the new artwork into the art graph. This will let users discover artwork generated by others

  • Provide functionality for sharing on social media & printing posters of the newly generated art

How it uses AI

  • Generative Adversarial Networks (GANs) will be used to generate new images

  • Image featurization (based on a deep neural network) and clustering to compute image similarity (see the sketch after this list)

  • (if implemented) Computer Vision API to generate captions
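
As a rough illustration of the similarity step, the sketch below assumes every collection image has already been featurized into a fixed-length vector by a pretrained network; finding visually similar works then reduces to a nearest-neighbor search over those vectors (cosine similarity here). The random arrays are stand-ins for real features, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in features: one 512-d vector per Met image, as a pretrained
# CNN featurizer would produce (e.g. penultimate-layer activations).
collection_features = rng.standard_normal((1000, 512))
query_feature = rng.standard_normal(512)  # features of the generated image

# Cosine similarity between the generated image and every collection image.
scores = (collection_features @ query_feature) / (
    np.linalg.norm(collection_features, axis=1) * np.linalg.norm(query_feature)
)
top5 = np.argsort(scores)[::-1][:5]
print("Indices of the most visually similar collection images:", top5)
```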

Timeline:

Date   Item
1/7    Interns Start: Orientation, project overview & planning
1/10   MET reviewing requirements for the event
1/11   1. Draft of desired user experience complete
       2. Framework functionality for each page complete
1/20   MVP version ready for review. Green light with MET/MIT/Microsoft stakeholders
2/4    Event at the MET

Description of Desired User Experience:

The drawing below outlines a proposed user experience for the project. Specific functionality will be defined with the team.

Tasks:

  • UX (3): Create the interactive website for exploring GAN art & viewing similar images

  • Image2Seed Model & API (2): Create a neural network that converts an image to a seed. This seed will then be used by BigGAN (or ProGAN) to produce an image that populates the exploration experience (see the sketch after the note below).

Note: given the limited time, we propose focusing on the UX experience for the project rather than ensuring it has ‘limitless’ interactivity. A constrained, predetermined approach with a good UI will be more impactful than lots of interactivity with a bad UI. With this in mind, we propose a precomputed exploration path as a backup for all UX.
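
For the Image2Seed task, here is a minimal sketch of one way the proposed network could be trained, assuming a frozen, pretrained generator: learn an encoder whose predicted seed, when re-rendered by the generator, reconstructs the input image. The linear `G` and random `images` are placeholders so the sketch runs standalone; in the project they would be BigGAN/ProGAN and actual Met artwork.

```python
import torch

# Placeholder for a trained generator G: seed (128-d) -> flattened image.
G = torch.nn.Linear(128, 64 * 64 * 3)
for p in G.parameters():
    p.requires_grad_(False)  # the generator stays frozen

# Image2Seed encoder E: flattened image -> seed.
E = torch.nn.Sequential(
    torch.nn.Linear(64 * 64 * 3, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 128),
)
opt = torch.optim.Adam(E.parameters(), lr=1e-3)

images = torch.randn(32, 64 * 64 * 3)  # stand-in batch of artworks
# Train E so that re-rendering its seed reconstructs the input image.
for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(G(E(images)), images)
    loss.backward()
    opt.step()
print(f"reconstruction loss after training: {loss.item():.4f}")
```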

Team:

MIT

  • Sara Schwettmann

  • Maddie Cusimano

  • SJ Klein

  • Luke Hewitt

  • Matthew Richie

  • Botong Ma

  • Lana Cook

MIT-interns

Microsoft

  • Chris Hoder

  • Mark Hamilton

Met

  • Kim Benzel

Additional Comments

Open Questions
