
Image2Seed Overview

Published Feb 04, 2019

Task:

The goal of this task is to create a DNN that maps an image from the Met’s collection to a seed for the MetGAN.

The purpose of this functionality is to:

  • Allow the user to select an image in the collection graph and generate a new image based on that selection

  • (stretch) Allow the user to initially select an image to explore around

Proposed Approach

For this task, we propose transfer learning a pre-trained ResNet50 model with Keras in Python.

Pseudo-steps:

  • Generate a large set of seeds, S (each a 512-dimensional real-valued vector), that span the space of possible seeds

  • Use these seeds to generate art from the MetGAN, G(S); each generated image is 512×512

  • Transfer learn the pre-trained ResNet50 model in Keras

    • The input images are the generated G(S); the labels are the corresponding seeds. One way to organize these pairs is a pandas DataFrame of the generated images and their seeds.

    • Add an output layer that maps ResNet50’s 2048-dimensional feature vector to a 512-dimensional seed vector

    • Loss function: L2 distance between predicted and true seeds

  • Save this model

  • Deploy the saved model as an API (spec below)
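The steps above can be sketched with Keras roughly as follows. This is a minimal sketch, not a finished implementation: the `metgan_generate` call in the comments is a hypothetical placeholder for the MetGAN generator, and the file name is illustrative.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, Model

SEED_DIM = 512

def build_image2seed(input_shape=(512, 512, 3), weights="imagenet"):
    # Pre-trained ResNet50 as a frozen feature extractor; with
    # pooling="avg" and include_top=False it outputs a 2048-d vector.
    base = ResNet50(weights=weights, include_top=False,
                    pooling="avg", input_shape=input_shape)
    base.trainable = False
    # New head mapping the 2048-d features to a 512-d seed.
    seed = layers.Dense(SEED_DIM)(base.output)
    model = Model(inputs=base.input, outputs=seed)
    # Minimizing MSE is equivalent (up to a constant factor) to
    # minimizing squared L2 distance between predicted and true seeds.
    model.compile(optimizer="adam", loss="mse")
    return model

# Training data: seeds S and the images generated from them.
# S = np.random.normal(size=(n, SEED_DIM))   # spread the seed space
# images = metgan_generate(S)                # hypothetical generator call
# model = build_image2seed()
# model.fit(images, S, epochs=10, batch_size=16)
# model.save("image2seed.h5")
```

Freezing the base network keeps training cheap; the base layers can be unfrozen later for fine-tuning if the mapping needs more capacity.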

API Spec

Request URI

POST {Endpoint}/Image2Seed/

Request Body

Media Types: "application/octet-stream", "multipart/form-data"

Name: Image

Type: object

Description: the input image to map to a seed

Sample Response:

{
  "seed": [0, 0, 0, 0, 0, … ]
}
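A minimal sketch of how this endpoint could be served, here using Flask (an assumption; any web framework works). The model-loading line and the zero-vector response are placeholders until the trained model is wired in.

```python
import io

import numpy as np
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
# model = tf.keras.models.load_model("image2seed.h5")  # saved in the step above

@app.route("/Image2Seed/", methods=["POST"])
def image2seed():
    # Accept the image either as a multipart form field ("Image")
    # or as raw octet-stream bytes, per the media types above.
    if "Image" in request.files:
        data = request.files["Image"].read()
    else:
        data = request.get_data()
    img = Image.open(io.BytesIO(data)).convert("RGB").resize((512, 512))
    x = np.asarray(img, dtype=np.float32)[None, ...]  # batch of 1 for the model
    # seed = model.predict(x)[0]
    seed = np.zeros(512)  # placeholder until the model is loaded
    return jsonify({"seed": seed.tolist()})

# To run locally: app.run(port=5000)
```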

Resources

  • You will likely use pandas DataFrames and NumPy arrays. Here’s a short tutorial; more are available online: https://pandas.pydata.org/pandas-docs/stable/10min.html

  • ResNet50 in Keras: https://blog.keras.io/category/tutorials.html

  • Transfer learning in Keras: https://keras.io/

  • For development:

    • https://azuremarketplace.microsoft.com/en-au/marketplace/apps/microsoft-ads.dsvm-deep-learning

    • https://azure.microsoft.com/en-us/services/machine-learning-service/
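The pandas bookkeeping mentioned in the steps above (pairing each generated image with its seed) could look like the following; the file paths and column names are assumptions, not a fixed schema.

```python
import numpy as np
import pandas as pd

n, seed_dim = 4, 512
seeds = np.random.normal(size=(n, seed_dim))

# One row per generated image: where it lives on disk and the
# 512-d seed that produced it (the training label).
df = pd.DataFrame({
    "image_path": [f"generated/{i:05d}.png" for i in range(n)],
    "seed": list(seeds),  # one 512-d vector per row
})
```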

Other Possible Approaches

  • Do SGD on the weights of the seed-retrieval neural network to maximize the similarity between the BigGAN output and the input image (training set = Met images). Use L2 loss on the pixels plus L2 loss on layer activations in AlexNet (or ResNet).
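One gradient step of this alternative could be sketched in TensorFlow as below: the retrieval network predicts a seed from a real image, the generator renders that seed, and the pixel plus feature L2 losses are backpropagated into the retrieval network's weights (maximizing similarity by minimizing distance). Here `generator` and `feature_net` are hypothetical differentiable stand-ins for BigGAN and a pre-trained AlexNet/ResNet feature extractor.

```python
import tensorflow as tf

def train_step(retrieval_net, generator, feature_net, images, opt):
    # One SGD step on the retrieval network's weights: generate from the
    # predicted seed and penalize pixel + feature distance to the input.
    with tf.GradientTape() as tape:
        seeds = retrieval_net(images)          # images -> predicted seeds
        recon = generator(seeds)               # seeds -> regenerated images
        loss = (tf.reduce_mean((recon - images) ** 2)
                + tf.reduce_mean((feature_net(recon)
                                  - feature_net(images)) ** 2))
    grads = tape.gradient(loss, retrieval_net.trainable_variables)
    opt.apply_gradients(zip(grads, retrieval_net.trainable_variables))
    return loss
```

This only works if the generator is differentiable with respect to the seed, which is the case for a GAN generator held fixed during training.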

Comments
Chris Hoder: https://cwhstorage123.blob.core.windows.net/website/index.html endpoint: https://imagedocker.azurewebsites.net/FindSimilarImages