
The data contains 325 samples, each named by a hash. Each sample consists of 100 consecutive frames of a grayscale video of cilia. 211 of the samples are training data and come with a PNG mask; the remaining 114 samples are for testing.

Figure 1. A single frame from the grayscale video.
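As a minimal sketch (not part of the original write-up), loading one sample into a NumPy array might look like the following. The directory layout, the `frame*.png` naming, and the `load_sample` helper are assumptions; adjust them to the actual data layout.

```python
import os

import numpy as np
from PIL import Image


def load_sample(data_dir, sample_hash):
    """Load the 100 grayscale frames of one sample as a (100, H, W) array.

    Assumes (hypothetically) that each sample is a directory named after
    its hash and contains individually numbered PNG frames.
    """
    frame_dir = os.path.join(data_dir, sample_hash)
    frame_files = sorted(f for f in os.listdir(frame_dir) if f.endswith(".png"))
    frames = [np.array(Image.open(os.path.join(frame_dir, f))) for f in frame_files]
    return np.stack(frames)  # shape: (100, height, width)
```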

Each mask has the same spatial dimensions (height, width) as the corresponding sample's frames:

  • 2 corresponds to cilia (what you want to predict!)
  • 1 corresponds to a cell
  • 0 corresponds to background (neither a cell nor cilia)

In this project, we only care about the cilia, which is the only class you are required to predict.
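Since only the cilia label is scored, a common preprocessing step is to collapse each mask into a binary cilia-vs-everything-else target. A minimal sketch, assuming the mask is read from a PNG file (the path below is a placeholder):

```python
import numpy as np
from PIL import Image

# Hypothetical path to one training mask; the label values are
# 0 (background), 1 (cell), and 2 (cilia).
mask = np.array(Image.open("masks/example_hash.png"))

# Only the cilia label needs to be predicted, so collapse the mask
# into a binary target: 1 where the pixel is cilia, 0 elsewhere.
cilia_target = (mask == 2).astype(np.uint8)
```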

Figure 2. A 2-label segmentation of the video.


Reference:
https://github.com/dsp-uga/sp18/tree/master/projects/p4
