Organizing a directory workflow for a Semantic Segmentation task

Semantic segmentation is the task of assigning a class label to every pixel in an image, so you can think of it as a binary or multi-class classification performed per pixel. That means the targets have the same spatial size as their corresponding images. How do we organize the directory for such a task, and how can we make use of the ImageDataGenerator? Let’s start with a simple example.

Binary/Multi-Class Classification

In a typical TensorFlow/Keras workflow for a classification task, the data are organized in the following format:

  • train
    • labelA

      image1, image2, …

    • labelB

      image1, image2, …

    • etc.

  • validation
    • labelA

      image1, image2, …

    • labelB

      image1, image2, …

    • etc.

  • test
    • labelA

      image1, image2, …

    • labelB

      image1, image2, …

    • etc.

Here the classes that we would like to predict are labelA, labelB, etc. During the training/validation/test steps, batches of images are transformed on the fly via keras.preprocessing.image.ImageDataGenerator, which generates batches of transformed tensors:

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(...)
valid_datagen = ImageDataGenerator(...)

Then we use flow_from_directory to create train_generator and validation_generator:

train_generator = train_datagen.flow_from_directory(
    'PATH_TO_TRAIN',
    target_size=target_size,
    batch_size=batch_size,
    class_mode=class_mode)

validation_generator = valid_datagen.flow_from_directory(
    'PATH_TO_VALIDATION',
    target_size=target_size,
    batch_size=batch_size,
    class_mode=class_mode)
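
With the generators ready, training amounts to handing them to the model. A minimal sketch, assuming model is an already compiled Keras classifier and EPOCHS is a placeholder (older Keras versions use model.fit_generator instead of model.fit):

model.fit(
    train_generator,              # yields (image_batch, label_batch) pairs
    epochs=EPOCHS,                # placeholder epoch count
    validation_data=validation_generator)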

Semantic Segmentation

In semantic segmentation, however, in addition to the images there is a set of targets (the segmentation masks) which, like the images, are stored as files. Therefore, the organization goes as follows:

  • train

    • images

      • dummy_label

        image1, image2, …
        
    • targets

      • dummy_label

        target1, target2, …
        
  • validation

    • images

      • dummy_label

        image1, image2, …
        
    • targets

      • dummy_label

        target1, target2, …
        

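This tree does not need to be built by hand. Here is a minimal sketch that creates it, assuming a hypothetical root folder named data (all folder names here are placeholders):

import os

# build the train/validation trees with an images and a targets branch;
# dummy_label is the single class subdirectory that flow_from_directory
# expects to find
for split in ('train', 'validation'):
    for kind in ('images', 'targets'):
        os.makedirs(os.path.join('data', split, kind, 'dummy_label'),
                    exist_ok=True)
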
Then in each of the train/validation/test sets (the test set follows the same layout as above) we need two ImageDataGenerators, one for the images:

train_image_datagen = ImageDataGenerator(...)

and one for the targets:

train_target_datagen = ImageDataGenerator(...)

Subsequently, we have to create the image and target generators that will be combined and passed to model.fit (see the sketch below):

train_image_generator = train_image_datagen.flow_from_directory(
    PATH_TO_TRAIN_IMAGES,
    class_mode=None,
    target_size=target_size,
    seed=FIXED_SEED,
    batch_size=batch_size)

train_target_generator = train_target_datagen.flow_from_directory(
    PATH_TO_TRAIN_TARGETS,
    class_mode=None,
    target_size=target_size,
    seed=FIXED_SEED,
    batch_size=batch_size)

Note that both train_image_generator and train_target_generator require the same seed; otherwise the targets will not match the input images. Furthermore, class_mode needs to be set to None so that each generator ignores the parent directory of the image and target files, dummy_label, and yields only the raw batches without labels. Obviously, both also need the same batch_size and target_size.
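
Since model.fit expects matching input/target pairs, the two generators can be zipped together so that each training step receives an image batch and its corresponding target batch; the shared seed is what keeps the two streams aligned. A minimal sketch, where model, TRAIN_STEPS, and EPOCHS are placeholders (steps_per_epoch is required because a zipped generator has no defined length):

# pair each image batch with its matching target batch
train_generator = zip(train_image_generator, train_target_generator)

model.fit(
    train_generator,
    steps_per_epoch=TRAIN_STEPS,  # e.g. n_train_images // batch_size
    epochs=EPOCHS)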

Conclusion

There are some subtle differences between setting up the directory structure and flow_from_directory for a semantic segmentation task and for an image classification task. One has to be aware of these distinctions in order to set up a functional segmentation pipeline.