SMILE Lab Challenge

1 Overview

A cell segmentation task.

It officially starts on 10/1 and ends at 11:59 pm on 2023/1/11 (GMT-5).

2 Datasets

We provide a training dataset with 37 images and a testing dataset with 14 images. Each image is 1000x1000 pixels. The images are in the im folder, and the ground-truth masks for the training dataset are in the mask folder. The testing dataset does not have ground truth.

To load an image, you can use the Python Pillow package:

from PIL import Image
import numpy as np

# load an image as a numpy array
img = np.asarray(Image.open("path/to/image.png"))

# load a gray mask and binarize it to 0/1
mask = np.where(np.asarray(Image.open("path/to/mask.png").convert("L")) > 127, 1, 0).astype("float32")

3 Evaluation

This competition is evaluated on the mean Dice coefficient. The Dice coefficient can be used to compare the pixel-wise agreement between a predicted segmentation and its corresponding ground truth. The formula is given by:

\[ \frac{2\,|X \cap Y|}{|X| + |Y|} \]

Or you can compute it in Python:

2 * np.sum(x * y) / (np.sum(x) + np.sum(y))

where \(X\) is the predicted set of pixels, and \(Y\) is the ground truth. The Dice coefficient is defined to be 1 when both \(X\) and \(Y\) are empty. The leaderboard score is the mean of the Dice coefficients for each image in the test set.
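
For reference, a minimal per-image Dice implementation for binary masks could look like the sketch below; the function name and the explicit empty-mask branch are illustrative, not the official scoring code.

import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient between two binary masks; 1.0 when both are empty."""
    pred = np.asarray(pred, dtype="float32")
    target = np.asarray(target, dtype="float32")
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: defined as a perfect match
    return float(2.0 * np.sum(pred * target) / denom)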

4 Submission File

In order to reduce the submission file size, our metric uses run-length encoding on the pixel values. Instead of submitting an exhaustive list of indices for your segmentation, you will submit pairs of values that contain a start position and a run length. E.g. ‘1 3’ implies starting at pixel 1 and running a total of 3 pixels (1,2,3).

The competition format requires a space delimited list of pairs. For example, ‘1 3 10 5’ implies pixels 1,2,3,10,11,12,13,14 are to be included in the mask. The metric checks that the pairs are sorted, positive, and the decoded pixel values are not duplicated. The pixels are numbered from top to bottom, then left to right: 1 is pixel (1,1), 2 is pixel (2,1), etc.
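
For illustration, a decoder for this format could look like the following sketch; rle_decode and its arguments are assumptions made for this example, not part of the provided code.

import numpy as np

def rle_decode(pixels: str, num_pixels: int) -> np.ndarray:
    """Decode a space-delimited 'start length' string into a flat 0/1 mask (1-based starts)."""
    mask = np.zeros(num_pixels, dtype="float32")
    values = list(map(int, pixels.split()))
    for start, length in zip(values[::2], values[1::2]):
        mask[start - 1:start - 1 + length] = 1.0
    return mask

# '1 3 10 5' marks pixels 1,2,3 and 10..14
rle_decode("1 3 10 5", 20)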

The file should contain a header and have the following format:

img,pixels
1,1 1 5 1
2,1 1
3,1 1

You can convert a mask to this format using the following Python code:

def rle(arr: np.ndarray) -> str:
    """Run-length encode a binary mask as 'start length' pairs (1-based starts)."""
    arr1d = np.asarray(arr).flatten()
    # pad with zeros so runs touching the borders are detected
    arr1d = np.pad(arr1d, (1, 1), mode="constant", constant_values=0)
    # positions where the value changes mark run starts and run ends
    runs = np.where(arr1d[1:] != arr1d[:-1])[0] + 1
    # turn every second entry from an end position into a run length
    runs[1::2] -= runs[::2]
    return " ".join(map(str, runs))

5 Code

The code entry point is src/train/train.py.

5.1 Requirements

PDM: https://pdm.fming.dev/latest/

5.2 Install dependencies

pdm install

Or to use the CUDA version:

pdm install -G ":all"

5.3 Run

To train the model, run

XLA_PYTHON_CLIENT_PREALLOCATE=false XLA_FLAGS='--xla_gpu_deterministic_ops=true' CUDA_VISIBLE_DEVICES=0 pdm run python -m train.train --no-infer --no-predict

To run inference with the model, run

XLA_PYTHON_CLIENT_PREALLOCATE=false XLA_FLAGS='--xla_gpu_deterministic_ops=true' CUDA_VISIBLE_DEVICES=0 pdm run python -m train.train --no-train --no-predict
