Data from: On the objectivity, reliability, and validity of deep learning enabled bioimage analyses
DOI: 10.5281/zenodo.4116338
Zenodo: 4116338
MaRDI QID: Q6693875
FDO: Q6693875
Dataset published in the Zenodo repository.
Theresa Lüffe, Nikolai Stein, Victoria Schoeffler, Matthias Griebel, Dominik Fiedler, Maren D. Lange, Manju Sasi, Christoph M. Flath, Nicolas Singewald, Robert Blum, Anupam Sah, Lucas B. Comeras, Ramon O. Tasan, Dennis Segebarth, Rohini Gupta, Christina Lillesaar, Alexander Dürr, Corinna Martin, Cora R. von Collenberg, Hans-Christian Pape
Publication date: 21 October 2020
Bioimage analysis of fluorescent labels is widely used in the life sciences. Recent advances in deep learning (DL) allow automating time-consuming manual image analysis processes based on annotated training data. However, manual annotation of fluorescent features with a low signal-to-noise ratio is subjective. Training DL models on subjective annotations may be unstable or yield biased models. In turn, these models may be unable to reliably detect biological effects. An analysis pipeline integrating data annotation, ground truth estimation, and model training can mitigate this risk. To evaluate this integrated process, we compared different DL-based analysis approaches. With data from two model organisms (mice, zebrafish) and five laboratories, we show that ground truth estimation from multiple human annotators helps to establish objectivity in fluorescent feature annotations. Furthermore, ensembles of multiple models trained on the estimated ground truth establish reliability and validity. Our research provides guidelines for reproducible DL-based bioimage analyses.
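The abstract rests on two ideas: estimating a consensus ground truth from several human annotators, and averaging the outputs of several independently trained models. The sketch below is a minimal illustration of both ideas only, not the pipeline used for this dataset; it assumes binary annotation masks and probability maps as NumPy arrays, and uses a simple pixel-wise majority vote and mean averaging, whereas the study itself employs more elaborate ground truth estimation and model ensembling.

```python
import numpy as np

def estimate_ground_truth(annotations, threshold=0.5):
    """Pixel-wise majority vote over binary masks from multiple annotators.

    annotations: array of shape (n_annotators, H, W) with values in {0, 1}.
    Returns a binary consensus mask of shape (H, W).
    """
    vote_fraction = np.mean(annotations, axis=0)
    return (vote_fraction >= threshold).astype(np.uint8)

def ensemble_prediction(model_outputs, threshold=0.5):
    """Average probability maps from several independently trained models.

    model_outputs: array of shape (n_models, H, W) with values in [0, 1].
    Returns the mean probability map and its binarised segmentation.
    """
    mean_map = np.mean(model_outputs, axis=0)
    return mean_map, (mean_map >= threshold).astype(np.uint8)

# Toy example: three annotators and two models on a 4x4 image.
rng = np.random.default_rng(0)
annotator_masks = rng.integers(0, 2, size=(3, 4, 4))
consensus_mask = estimate_ground_truth(annotator_masks)
model_probability_maps = rng.random(size=(2, 4, 4))
mean_map, segmentation = ensemble_prediction(model_probability_maps)
```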