
Image2Seed Overview

Published on Feb 04, 2019


The goal of this task is to create a DNN that will map an image from the Met’s collection to a seed for the MetGAN.

The purpose of this functionality will be to:

  • Allow the user to select an image in the collection graph & generate a new image based on that selection

  • (Stretch) Allow the user to select an initial image to explore around

Proposed Approach

For this task, we propose transfer learning from a pre-trained ResNet50 model using Keras in Python.


  • Generate a large set of image seeds S (each a 512-dimensional real-valued vector) that covers the space of possible seeds

  • Use these seeds to generate art from the MetGAN; each G(S) is a 512×512 image

  • Transfer-learn the pre-trained ResNet50 model in Keras:

    • The input images are the generated G(S); the labels are the corresponding seeds S. A pandas DataFrame pairing the generated images with their seeds is a convenient way to organize the training data.

    • Add an output layer that maps ResNet50's 2048-dimensional feature vector to a 512-dimensional seed vector

    • Loss function: L2 distance

  • Save this model

  • Deploy the saved model as an API (spec below)
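The training steps above can be sketched as follows. This is a minimal illustration, not the final implementation: the function name `build_image2seed` and the training-data comments are assumptions, and the generated images and seeds are presumed to already exist as arrays. For real transfer learning, keep `weights="imagenet"` and freeze the backbone as shown.

```python
import numpy as np
import tensorflow as tf

SEED_DIM = 512          # dimensionality of a MetGAN seed
IMG_SIZE = 512          # MetGAN output resolution (512x512 RGB)

def build_image2seed(weights="imagenet"):
    """ResNet50 feature extractor plus a Dense head mapping 2048 -> 512."""
    base = tf.keras.applications.ResNet50(
        weights=weights,            # "imagenet" for transfer learning
        include_top=False,
        pooling="avg",              # yields the 2048-d feature vector
        input_shape=(IMG_SIZE, IMG_SIZE, 3),
    )
    base.trainable = False          # freeze the pre-trained backbone
    seeds = tf.keras.layers.Dense(SEED_DIM, name="seed")(base.output)
    model = tf.keras.Model(base.input, seeds)
    model.compile(optimizer="adam", loss="mse")  # mean squared error = L2 distance
    return model

# Training data (assumed precomputed from the MetGAN):
#   images: (N, 512, 512, 3) array of generated G(S) outputs
#   seeds:  (N, 512) array of the corresponding seeds S
# model = build_image2seed()
# model.fit(images, seeds, batch_size=16, epochs=10, validation_split=0.1)
# model.save("image2seed.h5")
```

Freezing the backbone trains only the new Dense head; unfreezing the last few ResNet50 blocks afterward for a low-learning-rate fine-tune is a common follow-up.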

API Spec

Request URI

POST {Endpoint}/Image2Seed/

Request Body

Media Types: "application/octet-stream", "multipart/form-data"

  • The request body is the image to map to a seed, sent as binary image data

Sample Response:


"seed": [0 0 0 0 0 … ]



Resources

  • You will likely use pandas and NumPy datasets. Here's a short tutorial; more are available online:

  • ResNet50 in Keras:

  • Transfer learning in Keras. See file here:


  • For development:



Other Possible Approaches

  • Run SGD on the weights of the seed-retrieval neural network to maximize a similarity metric between the BIGGAN output and the input image (training set: Met images). Use an L2 loss on the pixels plus an L2 loss on layer activations in AlexNet (or ResNet).
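A toy version of this gradient-based seed-matching idea can be sketched with a hypothetical linear generator G(z) = Wz standing in for BIGGAN (the names `G`, `W`, and `recover_seed` are illustrative, not from the source). The real method would backpropagate through the frozen GAN and, for the perceptual term, through AlexNet/ResNet activations; here only the pixel L2 loss is shown, with an analytic gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
SEED_DIM, PIX_DIM = 512, 1024

# Hypothetical linear generator standing in for BIGGAN: G(z) = W z.
W = rng.normal(size=(PIX_DIM, SEED_DIM))

def G(z):
    return W @ z

def recover_seed(target, steps=300, lr=1e-4):
    """Gradient descent on the seed z to minimize ||G(z) - target||^2."""
    z = np.zeros(SEED_DIM)
    for _ in range(steps):
        residual = G(z) - target
        z -= lr * (2.0 * W.T @ residual)   # analytic gradient of the L2 loss
    return z

target = G(rng.normal(size=SEED_DIM))      # the "image" we want to match
z_hat = recover_seed(target)
init_loss = float(np.sum(target ** 2))               # loss at z = 0
final_loss = float(np.sum((G(z_hat) - target) ** 2))
```

With a real GAN the objective is non-convex, so in practice this is run from several random initial seeds and the best result is kept.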
