Analyzing Experts’ Low-level Perception Tasks While Doing 3D Image Segmentation
Anahita Sanandaji (Oregon State University)
Co-authors: Cindy Grimm (Oregon State University), Ruth West (University of North Texas)
3D volume segmentation is a fundamental process in many scientific and medical applications. Producing accurate segmentations efficiently is challenging, in part due to low imaging data quality (e.g., noise and low image resolution) and ambiguity in the data that can only be resolved with higher-level knowledge of the structure. Automatic algorithms exist, but there are many use cases in which they fail. The gold standard is still manual segmentation or review. Unfortunately, even for an expert, manual segmentation is laborious, time consuming, and error prone. Existing 3D segmentation tools are often designed around the underlying algorithm and do not take into account users' mental models, lower-level perceptual abilities, and higher-level cognitive tasks. We propose to analyze manual segmentation as a human-computer interaction paradigm to gain a better understanding of both low-level (perceptual) actions and higher-level tasks and decision-making processes. We initially conducted formative field studies using our novel hybrid protocol that blends observation, surveys, and eye-tracking. We then developed and validated data coding schemes to discern segmenters' low-level actions, higher-level tasks, and overall task structures. Using these schemes, we successfully identified workflow patterns and the different segmentation strategies employed by expert versus novice segmenters.