This task will focus on developing the front-end user experience for the GenArt Studio. As part of this task, we need to implement a website that enables the functionality below. Note that while we attempt to define the experience and broad functionality the site will provide, the implementation details and the experience itself can be refined as we progress.
Goals:
I can display the currently selected generated image
I can ‘explore’ variations on this image interactively, for example by progressing along a particular dimension to create new images
Example Implementation: Display the selected image as a large, centered image, with variations of this image appearing on each side as smaller thumbnails. When a user clicks a thumbnail, it moves to the center and new generated variations are created
Gillian Idea #1:
A spider web with the current generated image at the center and other Met images in the same category spread out from it (the web). Selecting one of the Met images moves your generated image incrementally in that direction in latent space.
Idea 1: when you get too close in latent space to one Met image, it disappears
Idea 2: the web nodes (not the center) are the N closest (or furthest) images to your current position in latent space
Gillian Idea #2:
(Discussed with Darius) Your creation on its own on one side of the page, with an assortment of Met objects in a grid (cards?) on the other side of the page.
(+/-) buttons on each Met image; hitting these buttons incrementally moves the generated image through latent space towards or away from the object being clicked (a rough sketch of this incremental step appears after the goals below).
I can transition from this view to see the most relevant image in the Met content graph (transition experience needs definition)
I can initialize the generated art image with a selected image (either an object ID or a URL to the image in the collection)
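Both Gillian ideas rely on the same mechanic: nudging the current latent vector toward (or away from) the latent vector of a clicked Met object and regenerating the image. A minimal sketch of that step is below, assuming each Met object has a precomputed latent vector and that a backend endpoint (here a hypothetical `/api/generate`) accepts a latent vector and returns the URL of a newly generated image; the endpoint name, step size, and state shape are illustrative only.

```typescript
// Sketch of the incremental latent-space step behind the spider-web and
// card-grid ideas. `/api/generate`, the step size, and the latent vector
// shape are assumptions for illustration, not a defined interface.

type LatentVector = number[];

interface ExplorationState {
  current: LatentVector;     // latent vector of the center / selected image
  currentImageUrl: string;   // generated image currently displayed
}

// Move `fraction` of the way from the current vector toward the target.
// A negative fraction moves away from the target instead.
function stepToward(current: LatentVector, target: LatentVector, fraction: number): LatentVector {
  return current.map((v, i) => v + fraction * (target[i] - v));
}

// For Idea 1 above: hide a web node once the center gets "too close" to it.
function euclidean(a: LatentVector, b: LatentVector): number {
  return Math.sqrt(a.reduce((sum, v, i) => sum + (v - b[i]) ** 2, 0));
}

// Hypothetical backend call: regenerate the image from the updated vector.
async function regenerate(latent: LatentVector): Promise<string> {
  const res = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ latent }),
  });
  return (await res.json()).imageUrl;
}

// Handler for the (+) / (-) buttons on a Met card (or a web node click).
async function onStepButton(state: ExplorationState, metLatent: LatentVector, direction: 1 | -1): Promise<ExplorationState> {
  const STEP = 0.1; // size of one increment; to be tuned during implementation
  const next = stepToward(state.current, metLatent, direction * STEP);
  return { current: next, currentImageUrl: await regenerate(next) };
}
```

The same `stepToward` helper covers both the spider-web and the card-grid variants; only the UI trigger differs.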
For this page, we propose modifying the JFK Files demo (see Resources)
Goals:
I can see the 10 closest images to the selected image (2 levels of depth)
I can see the graph structure of this relationship (a rough sketch of building this graph appears after this list)
I can click on an image and view its metadata: name, tags, date, etc.
I can see which image was matched to my generated image
I can choose an image to use as a seed for a new generated image (i.e. return to the GenArt Exploration page)
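As a rough sketch of how the two-level neighbor graph could be assembled on the client, the snippet below expands breadth-first from the matched image, assuming a hypothetical `/api/similar/{objectId}?n=10` endpoint that returns the IDs of the N most similar Met objects (the actual similarity source, presumably the content graph in Azure Search, is still to be defined).

```typescript
// Sketch of building the "10 closest, 2 levels deep" neighbor graph.
// The `/api/similar` endpoint and field names are assumptions.

interface GraphNode { id: string; depth: number; }
interface GraphEdge { source: string; target: string; }

async function getClosest(objectId: string, n: number): Promise<string[]> {
  const res = await fetch(`/api/similar/${encodeURIComponent(objectId)}?n=${n}`);
  return res.json();
}

// Breadth-first expansion from the matched image, limited to two levels.
async function buildNeighborGraph(rootId: string, perNode = 10, maxDepth = 2) {
  const nodes = new Map<string, GraphNode>([[rootId, { id: rootId, depth: 0 }]]);
  const edges: GraphEdge[] = [];
  let frontier = [rootId];

  for (let depth = 1; depth <= maxDepth; depth++) {
    const next: string[] = [];
    for (const id of frontier) {
      for (const neighborId of await getClosest(id, perNode)) {
        edges.push({ source: id, target: neighborId });
        if (!nodes.has(neighborId)) {
          nodes.set(neighborId, { id: neighborId, depth });
          next.push(neighborId);
        }
      }
    }
    frontier = next;
  }
  return { nodes: [...nodes.values()], edges };
}
```

The resulting `nodes`/`edges` lists can feed whatever graph visualization the re-skinned demo ends up using.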
Goals:
I can choose a category from a drop-down (tags from the Met Collection list)
I can see [9] sample images from the selected tag (see the query sketch after this list)
I can choose a sample image to use with the GenArt Exploration page
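A possible shape for the tag-based sampling is sketched below, using the standard Azure Search REST query API. The service name, index name, and field names (`tags`, `primaryImageUrl`) are placeholders, since the index schema for the MSFT-provided instance is not yet defined.

```typescript
// Sketch of fetching [9] sample images for a selected tag from Azure Search.
// Service name, index name, API version, and field names are assumptions.

const SEARCH_SERVICE = 'https://<service-name>.search.windows.net'; // placeholder
const INDEX = 'met-collection';                                      // assumed index name
const API_VERSION = '2019-05-06';

async function sampleImagesForTag(tag: string, apiKey: string, count = 9): Promise<string[]> {
  // Filter to documents whose `tags` collection contains the selected tag.
  const filter = encodeURIComponent(`tags/any(t: t eq '${tag}')`);
  const url = `${SEARCH_SERVICE}/indexes/${INDEX}/docs` +
    `?api-version=${API_VERSION}&search=*&$filter=${filter}&$top=${count}`;
  const res = await fetch(url, { headers: { 'api-key': apiKey } });
  const body = await res.json();
  // Azure Search returns matching documents under `value`.
  return body.value.map((doc: { primaryImageUrl: string }) => doc.primaryImageUrl);
}
```

The drop-down itself can be populated from the same index (e.g. a facet query over `tags`) or from a static Met Collection tag list.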
Goals:
I can visualize all of the variations on my generated art that I explored
I can save the collection as an image file
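One way to implement the export, sketched below under the assumption that the explored variations are available as image URLs, is to tile them onto an offscreen canvas and trigger a client-side download; this uses only standard browser canvas and Blob APIs, with the thumbnail size and filename as placeholders.

```typescript
// Sketch of saving the explored variations as a single PNG.
// Thumbnail size, grid layout, and filename are placeholders.

async function saveCollectionAsImage(imageUrls: string[], filename = 'genart-collection.png') {
  const THUMB = 256; // cell size in px
  const cols = Math.ceil(Math.sqrt(imageUrls.length));
  const rows = Math.ceil(imageUrls.length / cols);

  const canvas = document.createElement('canvas');
  canvas.width = cols * THUMB;
  canvas.height = rows * THUMB;
  const ctx = canvas.getContext('2d')!;

  // Draw each explored variation into its grid cell.
  await Promise.all(imageUrls.map((url, i) => new Promise<void>((resolve, reject) => {
    const img = new Image();
    img.crossOrigin = 'anonymous'; // needed if the images are served from another origin
    img.onload = () => {
      ctx.drawImage(img, (i % cols) * THUMB, Math.floor(i / cols) * THUMB, THUMB, THUMB);
      resolve();
    };
    img.onerror = reject;
    img.src = url;
  })));

  // Compose the canvas into a PNG blob and download it.
  canvas.toBlob((blob) => {
    if (!blob) return;
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = filename;
    link.click();
    URL.revokeObjectURL(link.href);
  }, 'image/png');
}
```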
The MSFT team will be providing an Azure Search instance for the content graph
We propose re-skinning this demo for the content graph:
https://jfkfiles.azurewebsites.net
https://github.com/Microsoft/AzureSearch_JFK_Files
AngularJS hosted on Azure Web App