Adapting Segment Anything Model to Invasive Melanoma Segmentation in Microscopy Slide Images
Qingyuan Liu
EECS Department, University of California, Berkeley
Technical Report No. UCB/EECS-2024-173
August 9, 2024
http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-173.pdf
Melanoma segmentation in Whole Slide Images (WSIs) has demonstrated great usefulness in aiding prognosis and in measuring crucial prognostic factors such as Breslow depth and primary invasive tumor size. In this thesis, we present a novel approach that uses the Segment Anything Model (SAM) for automatic melanoma segmentation in microscopy slide images. Our method uses an initial semantic segmentation model to generate initial segmentation masks and prompts SAM with prompts derived from those masks. We design a dynamic prompt-type determination strategy that combines centroid prompts and grid prompts to achieve the best coverage over the very high resolution of slide images while maintaining the quality of the generated prompts. To optimize for invasive melanoma segmentation, we further refine prompt generation by implementing in-situ melanoma detection and low-confidence region filtering. We select SegFormer as the initial segmentation model and EfficientSAM as the segment anything model for parameter-efficient fine-tuning. Our experimental results demonstrate that this approach not only surpasses other state-of-the-art melanoma segmentation methods but also performs favorably, with a significant gain in IoU over the baseline performance of SegFormer.
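The dynamic prompt-type strategy described in the abstract could be sketched as follows: each connected region of the initial segmentation mask yields either a single centroid point prompt (for compact regions) or a grid of interior points (for large regions, so coverage scales with region size). This is a minimal illustrative sketch, not the report's implementation; the function names, the area threshold, and the grid step are assumptions.

```python
from collections import deque


def connected_components(mask):
    """4-connected components of a binary mask (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                comp, queue = [], deque([(i, j)])
                seen[i][j] = True
                while queue:
                    r, c = queue.popleft()
                    comp.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                comps.append(comp)
    return comps


def generate_prompts(mask, grid_area_threshold=64, grid_step=4):
    """Return (row, col) point prompts for SAM from an initial mask.

    Small regions get one centroid prompt; regions larger than
    `grid_area_threshold` pixels get a grid of interior points.
    (Threshold and step are illustrative placeholders.)
    """
    prompts = []
    for comp in connected_components(mask):
        if len(comp) <= grid_area_threshold:
            r = round(sum(p[0] for p in comp) / len(comp))
            c = round(sum(p[1] for p in comp) / len(comp))
            prompts.append((r, c))
        else:
            cells = set(comp)
            rmin, rmax = min(p[0] for p in comp), max(p[0] for p in comp)
            cmin, cmax = min(p[1] for p in comp), max(p[1] for p in comp)
            for r in range(rmin, rmax + 1, grid_step):
                for c in range(cmin, cmax + 1, grid_step):
                    if (r, c) in cells:  # keep only points inside the region
                        prompts.append((r, c))
    return prompts
```

In a full pipeline, regions flagged by in-situ melanoma detection or low-confidence filtering would be dropped before prompt generation, and the surviving prompts would be passed to the SAM decoder.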
Advisor: Avideh Zakhor
BibTeX citation:
@mastersthesis{Liu:EECS-2024-173,
    Author = {Liu, Qingyuan},
    Title = {Adapting Segment Anything Model to Invasive Melanoma Segmentation in Microscopy Slide Images},
    School = {EECS Department, University of California, Berkeley},
    Year = {2024},
    Month = {Aug},
    Url = {http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-173.html},
    Number = {UCB/EECS-2024-173},
    Abstract = {Melanoma segmentation in Whole Slide Images (WSIs) has demonstrated great usefulness in aiding prognosis and in measuring crucial prognostic factors such as Breslow depth and primary invasive tumor size. In this thesis, we present a novel approach that uses the Segment Anything Model (SAM) for automatic melanoma segmentation in microscopy slide images. Our method uses an initial semantic segmentation model to generate initial segmentation masks and prompts SAM with prompts derived from those masks. We design a dynamic prompt-type determination strategy that combines centroid prompts and grid prompts to achieve the best coverage over the very high resolution of slide images while maintaining the quality of the generated prompts. To optimize for invasive melanoma segmentation, we further refine prompt generation by implementing in-situ melanoma detection and low-confidence region filtering. We select SegFormer as the initial segmentation model and EfficientSAM as the segment anything model for parameter-efficient fine-tuning. Our experimental results demonstrate that this approach not only surpasses other state-of-the-art melanoma segmentation methods but also performs favorably, with a significant gain in IoU over the baseline performance of SegFormer.},
}
EndNote citation:
%0 Thesis
%A Liu, Qingyuan
%T Adapting Segment Anything Model to Invasive Melanoma Segmentation in Microscopy Slide Images
%I EECS Department, University of California, Berkeley
%D 2024
%8 August 9
%@ UCB/EECS-2024-173
%U http://www2.eecs.berkeley.edu/Pubs/TechRpts/2024/EECS-2024-173.html
%F Liu:EECS-2024-173