We need your help to beta test the new Etch a Cell - Correct a Cell project! Please leave us your feedback using this short Google form: https://forms.gle/g43c6JxiL8kiv15f6
This project is part of the Etch a Cell organization, where scientists explore different aspects of cell biology to gain insight into the progression of diseases such as diabetes, heart disease, cancer, and other chronic illnesses.
Previously, your help enabled the Etch a Cell - Fat Checker and Etch a Cell - Fat Checker Round 2 teams to identify and annotate fat droplets in liver and breast tissue images for studies of fatty liver disease and breast cancer. Your annotations have also helped us build an automated machine learning framework for fat droplet detection.
While these automated models performed reasonably well at predicting fat droplets, a substantial number of subjects confused the machine, leading to poor annotation performance. In this project, we aim to test an emerging “human-in-the-loop” strategy, in which citizen scientists provide critical information on the subjects that need the most attention. An imperfect machine learning model supplies an initial guess, which volunteers use as a starting point and then edit as needed. In this way, volunteer effort is targeted at where the AI fails: the machines do the annotations they are good at, while the volunteers do the ones they are good at! This should reduce the total time it takes to get through the large amounts of data we continue to collect, and help accelerate biomedical research.
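The routing idea behind this strategy can be sketched in a few lines: subjects where the model is confident keep the machine annotation, while low-confidence subjects are queued for volunteer attention. This is an illustrative sketch only; the function name, data shapes, and threshold value are assumptions, not part of the actual project pipeline.

```python
def route_subjects(predictions, threshold=0.8):
    """Split model predictions into auto-accepted and volunteer-review queues.

    `predictions` is a list of (subject_id, confidence) pairs; the
    threshold is illustrative, not a value used by the project.
    """
    auto_accepted, needs_review = [], []
    for subject_id, confidence in predictions:
        if confidence >= threshold:
            auto_accepted.append(subject_id)   # trust the machine annotation
        else:
            needs_review.append(subject_id)    # send to citizen scientists
    return auto_accepted, needs_review

preds = [("s1", 0.95), ("s2", 0.42), ("s3", 0.88), ("s4", 0.60)]
accepted, review = route_subjects(preds)
# accepted == ["s1", "s3"]; review == ["s2", "s4"]
```

The design choice is simply that volunteer time is the scarce resource, so only the subjects the machine is unsure about are sent their way.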
As a next step, using the data from the Etch a Cell - Fat Checker projects, we are building new infrastructure tools that will enable many future projects to combine citizen science and machine learning to solve critical research problems efficiently.
We have upgraded the existing freehand line drawing tool on Zooniverse: users can now edit a drawn shape, undo or redo at any stage of drawing, automatically close open shapes, and delete any drawings. Below is a visualization of an example drawing, where the middle panel illustrates the editing state (indicated by the open dashed line) and the re-drawn/edited shape. The toolbox with undo, redo, auto-close, and delete functions is also shown.
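The undo/redo and auto-close behaviors described above follow a standard pattern: two stacks of shape snapshots, where each new edit clears the redo chain. This is a minimal sketch of that pattern, assuming each edit snapshots the shape's point list; the class and method names are illustrative, not the actual Zooniverse implementation.

```python
class DrawingHistory:
    """Undo/redo history for a single freehand shape."""

    def __init__(self):
        self._undo, self._redo = [], []
        self.points = []          # current shape as a list of (x, y) pairs

    def apply(self, new_points):
        """Record the current state, then replace the shape."""
        self._undo.append(list(self.points))
        self._redo.clear()        # a fresh edit invalidates the redo chain
        self.points = list(new_points)

    def undo(self):
        if self._undo:
            self._redo.append(list(self.points))
            self.points = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(list(self.points))
            self.points = self._redo.pop()

    def auto_close(self):
        """Close an open shape by joining the last point back to the first."""
        if self.points and self.points[0] != self.points[-1]:
            self.apply(self.points + [self.points[0]])

h = DrawingHistory()
h.apply([(0, 0), (1, 1)])
h.apply([(0, 0), (1, 1), (2, 0)])
h.undo()   # back to the two-point stroke
h.redo()   # forward again to the three-point stroke
h.auto_close()
```

Because auto-close goes through `apply`, it is itself undoable, which matches the expectation that every tool action can be stepped back.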
We have also built new infrastructure that lets researchers upload outlines from machine learning (or any other automated method) in a format compatible with the freehand line tool. Volunteers on a Zooniverse project are shown the pre-loaded machine outlines for each subject and can edit them with the enhanced freehand drawing tool described above. Once volunteers submit their corrected/edited annotations, their responses are recorded and used by the research teams for their ensuing analyses. The figure below shows an example of what a volunteer would see in the new correct-a-machine workflow: the green outlines shown on top of the subject image are loaded directly from a machine model prediction, and volunteers can interact with them. And if a volunteer feels the machine missed something that needed an annotation, they can draw a new annotation themselves.
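To make the "upload outlines in a compatible format" step concrete, here is a hypothetical sketch of packaging a machine-predicted contour as a freehand-style annotation. The field names (`tool`, `pointsX`, `pointsY`) are assumptions for illustration; the actual upload format is defined by the new Zooniverse infrastructure and is not specified in this post.

```python
import json

def outline_to_annotation(contour, tool_index=0):
    """Convert a list of (x, y) contour points into a freehand-style
    annotation dictionary with flat pointsX / pointsY arrays.

    Hypothetical format: not the official Zooniverse schema.
    """
    xs = [float(x) for x, _ in contour]
    ys = [float(y) for _, y in contour]
    return {"tool": tool_index, "pointsX": xs, "pointsY": ys}

# An outline as it might come out of a segmentation model's contour extraction.
machine_contour = [(10, 12), (14, 15), (18, 13), (15, 9)]
annotation = outline_to_annotation(machine_contour)
print(json.dumps(annotation))
```

The point of a shared format like this is that a volunteer's edits and a machine's predictions are interchangeable downstream: the analysis code never needs to know which one produced a given outline.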