
Research Article
FocAnnot: Patch-Wise Active Learning for Intensive Cell Image Segmentation
@INPROCEEDINGS{10.1007/978-3-030-67540-0_21,
  author={Bo Lin and Shuiguang Deng and Jianwei Yin and Jindi Zhang and Ying Li and Honghao Gao},
  title={FocAnnot: Patch-Wise Active Learning for Intensive Cell Image Segmentation},
  proceedings={Collaborative Computing: Networking, Applications and Worksharing. 16th EAI International Conference, CollaborateCom 2020, Shanghai, China, October 16--18, 2020, Proceedings, Part II},
  proceedings_a={COLLABORATECOM PART 2},
  year={2021},
  month={1},
  keywords={Active learning; Intensive cell image; Duplicate annotation; Semantic segmentation},
  doi={10.1007/978-3-030-67540-0_21}
}
Bo Lin
Shuiguang Deng
Jianwei Yin
Jindi Zhang
Ying Li
Honghao Gao
Year: 2021
COLLABORATECOM PART 2
Springer
DOI: 10.1007/978-3-030-67540-0_21
Abstract
In the era of deep learning, data annotation has become essential but costly work, especially for biomedical image segmentation. To tackle this problem, active learning (AL) selects and annotates a subset of the available images for model training while retaining segmentation accuracy. Existing AL methods usually treat an image as a whole during selection. However, for an intensive cell image that contains many similar cell objects, annotating all of those objects duplicates effort and brings little benefit to the segmentation model. In this study, we present a patch-wise active learning method, FocAnnot (focal annotation), to avoid such redundant annotation. The main idea is to group different regions of images in order to discriminate duplicate content, and then to evaluate novel image patches with a proposed cluster-instance double ranking algorithm. Instead of the whole image, experts only need to annotate specific regions within an image, which reduces the annotation workload. Experiments on a real-world dataset demonstrate that FocAnnot can save about 15% of the annotation cost while still obtaining an accurate segmentation model, or provide a 2% performance improvement at the same cost.
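The selection strategy the abstract describes, grouping patches to expose duplicate content and then double-ranking clusters and instances, could be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the k-means grouping, the feature vectors, the uncertainty scores, and the function names are all assumptions made for the example.

```python
import numpy as np

def kmeans(feats, k, iters=20, seed=0):
    """Minimal k-means used here to group visually similar patches
    (a stand-in for whatever grouping the method actually uses)."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)].astype(float)
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # Assign each patch feature to its nearest cluster center.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute centers from current assignments.
        for c in range(k):
            pts = feats[labels == c]
            if len(pts):
                centers[c] = pts.mean(0)
    return labels

def select_patches(feats, uncertainty, k=3, budget=3):
    """Cluster-instance double ranking (illustrative): rank clusters by
    mean uncertainty, rank patches inside each cluster by uncertainty,
    then sweep the clusters round-robin so similar (same-cluster)
    patches are not sent to the annotator repeatedly."""
    labels = kmeans(feats, k)
    clusters = [np.where(labels == c)[0] for c in range(k)]
    # Instance ranking: sort each cluster's patches by uncertainty, descending.
    clusters = [idx[np.argsort(-uncertainty[idx])] for idx in clusters if len(idx)]
    # Cluster ranking: most uncertain clusters first.
    clusters.sort(key=lambda idx: -uncertainty[idx].mean())
    picked, round_i = [], 0
    while len(picked) < budget:
        progressed = False
        for idx in clusters:
            if round_i < len(idx) and len(picked) < budget:
                picked.append(int(idx[round_i]))
                progressed = True
        if not progressed:
            break  # every cluster exhausted
        round_i += 1
    return picked
```

Under this sketch, each annotation round touches at most one patch per cluster, which is one plausible way to avoid labeling many near-duplicate cell regions from the same image.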