Talk is a place for Zooniverse volunteers and researchers to discuss their projects, collect and share data, and work together to make new discoveries.
March 29th 2025, 10:30 pm
This comment has been deleted
March 29th 2025, 8:43 pm
Think about how many time zones there are. Then there is the switch to and from Daylight Saving Time / Summer Time: some countries (or parts of countries) change on a different date than others, and some don't change at all. So the Zooniverse would have to know what time zone you are in, and also what country (or smaller region) you are in. And people in a place that changes time from winter to summer will have one 23-hour day and one 25-hour day each year. And what about people who travel from one time zone to another: how would the Zooniverse calculate the start and end of their day? And what about people who use VPNs or other means to make their internet connection look like it's coming from somewhere other than where they actually are?
This suggestion sounds good on the surface, but it opens a huge can of worms. The stats are accurate; they just use a different time window than the one you are used to. It's not difficult at all to get used to thinking of your stats in terms of the "Zooniverse day" (i.e., UTC). Lots of people who work or socialize across time zones do this routinely. You can learn to do it too: I bet you won't find it difficult if you think of it that way.
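To make the "Zooniverse day" concrete, here's a minimal sketch (illustrative Python, not Zooniverse code; the timestamp and the UTC-4 offset are made up for the example) of how the very same classification can land in different daily buckets depending on whether you count by local time or by UTC:

```python
from datetime import datetime, timezone, timedelta

# A classification made at 9:30 pm on 29 March in US Eastern time
# (UTC-4 on that date, since US DST starts in early March).
eastern = timezone(timedelta(hours=-4))
clicked = datetime(2025, 3, 29, 21, 30, tzinfo=eastern)

# Local calendar day vs. the UTC "Zooniverse day" the stats pages use.
local_day = clicked.date()
utc_day = clicked.astimezone(timezone.utc).date()

print(local_day)  # 2025-03-29
print(utc_day)    # 2025-03-30 -- same click, counted in the next UTC day
```

Neither count is wrong; they're just different windows over the same events, which is why per-user local-time stats would require all the location guesswork described above.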
March 29th 2025, 4:56 am
This comment has been deleted
March 29th 2025, 12:01 am
Hi all!
I'm currently working on a small-scale Galaxy Zoo project that will essentially serve as a beta for a future, larger project. I'm testing it with my research group and some other volunteers, and everything is going well.
With some of the initial results, I've calculated the consistency per user using the method outlined by Willett+2013 and have also attempted their weighting function. However, I'm not sure how effective their weighting function will be with my results, as most users' consistency values so far lie around 0.8–0.9. Does anyone have any advice here?
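For what it's worth, the Galaxy Zoo 2 weighting in Willett+2013 is, as I recall it, w = min(1, (κ/0.6)^8.5) — the 0.6 threshold and 8.5 exponent are quoted from memory, so please check them against the paper. A quick sketch shows why it barely separates users clustered at κ ≈ 0.8–0.9:

```python
def gz2_weight(kappa: float) -> float:
    """Consistency-based user weight in the style of Willett+2013.

    The threshold (0.6) and exponent (8.5) are from memory -- verify
    against the paper before using. The weight saturates at 1.0, so
    only low-consistency users are down-weighted at all.
    """
    return min(1.0, (kappa / 0.6) ** 8.5)

for kappa in (0.5, 0.6, 0.8, 0.9):
    print(kappa, round(gz2_weight(kappa), 3))  # 0.5 -> ~0.21; the rest -> 1.0
```

If most of your users sit at 0.8–0.9, this function pins them all at weight 1.0 and effectively reduces to an unweighted vote, which would explain why it doesn't do much for your sample.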
I've also considered implementing an 'experience'-based weighting function that incorporates consistency too, as my beta is still a bit difficult for newcomers. Does anyone know how effective these types of weighting functions are? I've found a recent paper (Chandler+2024) suggesting there is some merit to it, but I thought I would ask.
Other than that, I've had some issues with tied classifications, as I'm currently using a low, even subject retirement count due to time constraints. I suspect this will be less of an issue with a larger-scale project and an odd retirement count. I also suspect an 'experience'-based weighting function may help here, but I'm unsure.
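On ties: with an even retirement count, a two-way question can split exactly down the middle, whereas an odd count guarantees a strict winner for two options (ties remain possible with three or more answers). A toy sketch with hypothetical vote lists:

```python
from collections import Counter

def plurality(votes):
    """Return the winning answer, or None on a tie for first place."""
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tied -- needs a tie-breaker, e.g. user weights
    return counts[0][0]

print(plurality(["smooth", "smooth", "featured", "featured"]))            # None: 2-2 tie
print(plurality(["smooth", "smooth", "featured", "featured", "smooth"]))  # smooth
```

This is also where a (non-degenerate) user weighting could earn its keep: weighted vote totals almost never tie exactly, so they double as a tie-breaker.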
Thank you for your time!
-Trevor
March 28th 2025, 1:56 pm
I also want to do this project
March 25th 2025, 9:37 pm
Thanks @a_allan. Documentation Detectives is a good illustration of how the separate scrolling facility can be useful in particular circumstances.
I've looked at that project. The subjects are interesting and I would have enjoyed transcribing them, but I don't have the patience for the underline-and-transcribe, one-line-of-text-at-a-time method. Especially because the majority of their workflows have had Subjects with only a small amount of text, which IMO would have been more efficient to transcribe all at once in a text box. (I don't know, but it's possible the one-line-at-a-time method is easier for post-classification processing by the researchers.) They did have some longer documents in at least one recent workflow, though.
March 25th 2025, 5:08 pm
Here's a link to Documentation Detectives. The museum accession cards in that project are quite short, which makes the split scrolling a lot more useful.
March 25th 2025, 4:48 pm
I've been following this topic with interest as it's taken me a while to become accustomed to some of the FEM projects, and I agree with so many of the comments made.
You ask if anyone actually likes the separate scrolling behaviour, and personally I wouldn't go that far (not yet!) but I have noticed a couple of projects where I've found it useful. In "Documentation Detectives: Transcribing Accession Registers" (sorry, not sure how to link here) it's very helpful, possibly because there isn't a long 'task' list on the right, and once a transcription is completed, the two buttons required to finish the task are already aligned, and in fact it minimises any scrolling required.
I also noticed the same in the "Arctic Archives" project, which seems to have disappeared now, but I posted about it here https://www.zooniverse.org/talk/2354/3326062?comment=5465725&page=4 That's only 2 out of many projects, but maybe projects with a particular format (possibly with densely packed data in the image viewer but not long lists of task questions?) lend themselves more to this method. The ability to toggle between methods sounds ideal if possible.
March 24th 2025, 11:33 pm
This is an issue with externally hosted images. The research team has been contacted. Thanks!
March 24th 2025, 10:51 pm
Another couple of things
one reason there is much more scrolling in FEM than PFE workflows is that text boxes and selection buttons are much taller in FEM, and there is more vertical space between elements too. Therefore the same set of questions makes for a much taller question page in FEM workflows. Expandable text boxes in PFE are the height of one character when the page loads, and expand only if you enter enough text so it has to wrap onto a second line. In my classifying experience, it's very common for the text not to need a second line. Expandable text boxes in FEM are the height of 3 characters, wasting a lot of page height if you only enter one or two words in most cases. Surely these taller elements are a deliberate design choice, but IMO the advantages are far outweighed by the disadvantage of requiring much more scrolling. One example: people complained in that long thread when the FEM version of the survey task was introduced (and also on talk in many projects) that they could no longer see the full table of animal choices, because the cells were taller, although the earlier (PFE) versions of the same project showed all the choices on their screen simultaneously.
regarding the annotate button, I mentioned it because I was speculating that since it's selected by default in all PFE workflows (even ones without drawing), its absence might be the cause of the different way subject scrolling worked in FEM non-drawing workflows. If so, putting it back even though it serves no purpose might resolve the problem. However, if I understand you correctly, it's unrelated.
I am finding it really hard to press Ctrl when I want to zoom the image, because I expect that to zoom the entire page. I use that on other sites frequently (although rarely on Zooniverse pages). It's just really weird to have a long-established keyboard shortcut act differently in this one situation.
March 24th 2025, 10:30 pm
What I really want is to eliminate the separate movement of the subject viewer and tasks altogether, so scrolling works as it does on other sites (and other Zooniverse pages): the whole page moves together.
I want to emphasize that I'm complaining about the movement of the subject viewing window separate from the tasks. I do not want to eliminate the ability to move the subject within the viewing window (as we can do in PFE projects). We have to actively put the focus on the Subject, and then we can move (and zoom) the image within the window, without altering the position (or size) of the rest of the page. But the subject viewing window always remains in a fixed position relative to the tasks. At least for me, this PFE setup is highly efficient when a classification has a tall image, or a long list of questions, or both.
Does anyone actually like the separate scrolling behavior? It makes me dizzy. Whenever one side moves without the other, at the least I have to pause to refocus my vision and attention. And I almost always have to waste time and hand/wrist movements struggling with the "reluctant to respond" scrolling to reposition the two sides into a usable alignment to continue answering the questions.
But if the separate scrolling is popular, perhaps you could add a toggle that classifiers could choose to allow the subject viewer and questions to scroll separately or to keep their relative positions fixed.
March 24th 2025, 9:39 pm
Thank you @eatyourgreens and @am.zooni these descriptions are very helpful!
I think what I'm understanding is as follows:

- eatyourgreens has linked/noted that subject images with a taller/longer task area cause more scrolling than expected, compared to the subject and task area scrolling synchronously
- am.zooni and eatyourgreens both note the asynchronous scrolling between the subject image and task area is disorienting, especially with tall portrait images and long task areas (though not limited to those specifics)
With so many different image sizes, orientations, classifier layouts, and task area sizes, I can see how the layout and scrolling can feel awkward at times. We’ll take another look at how this works and see if there’s a way to make it smoother. I’ll pass this feedback along to our designer and the rest of the team. Thank you!!
March 24th 2025, 8:59 pm
Thank you for the device information and screenshots! I've opened https://github.com/zooniverse/front-end-monorepo/issues/6793 to document the issue and track its status.
March 24th 2025, 3:53 pm
HMS NHS (also finished now) has workflows with lots of text boxes to fill out on the righthand side. The double page images there aren't so tall, though, so the scrolling problem isn't as evident. To reproduce the scrolling problem, you probably want to combine a workflow like HMS NHS (with many small tasks) with a tall subject like Beyond Borders.
https://www.zooniverse.org/projects/msalmon/hms-nhs-the-nautical-health-service/classify/workflow/18625/subject-set/83582
March 24th 2025, 2:24 pm
NfN-Capture the Collections is PFE. I agree that there are too many questions on the page in the workflow you link to. It's a pain in the neck to classify in that workflow (I personally bypass it and work only on other workflows in the project). Michael doesn't build workflows with such long pages, but some data providers have their own views about putting all the questions on one page rather than spreading them over two or more pages.
It's also true that occasionally a workflow has too many questions to fit in the height of my screen so I have to move the image, but it's a straightforward drag-down in PFE, while in FEM, sometimes I have to drag down but other times I have to drag up, because the movements of the image and questions relative to each other are unpredictable. (At least, I have not identified any consistency in how they will move. However, as I mentioned, I am new at this. I have worked in FEM projects before, but only ones with short images and short question pages. I have avoided more complex workflows and tall images before now, but decided to try one now that mouse wheel zooming has been enabled.)
The FEM workflows with tall images that I've been trying are in NfN-CAS Plants to Pixels. (Both workflows have the same set of questions and same-dimension images.)
Jim describes the very disorienting and annoying stop-start action of the task/question movement far more clearly and concisely than I could.
These NfN-CAS workflows would require a fair amount of scrolling even if they were PFE, because first I have to go to the bottom of the image to check info in the metadata. (The metadata button is another annoyance: it takes a much bigger movement to reach it from the question panel than in PFE.) Then I have to look at two (or more) regions of the image to answer the questions, because the first two questions are about other information that is (almost always) located off the label, sometimes far away, even in the diagonally opposite corner. The pausing (or skidding as I had been thinking of it) had me thinking at first that my mouse wheel and PgUp/Down keys were simultaneously failing, but when I switched to a PFE workflow and also tried a few other websites, the scrolling hardware worked perfectly.
As I said, the weird movement (and sometimes unexpected movement of one side only) is disorienting. Plus it wastes time. Like so much in FEM: relative to PFE, functions still work, but they are more cumbersome, slower, require far more scrolling and pointer movement and clicking, and full of distractions from the task. From the viewpoint of this classifier, who wants to submit accurate classifications and spend my time answering the questions rather than fiddling with the UI, FEM is a downgrade, which is sad and frustrating.
March 24th 2025, 12:01 pm
Hi Mark!
Beyond Borders is finished now, but it has subjects where the split scroll leads to an irritating UX:
https://www.zooniverse.org/projects/mainehistory/beyond-borders-transcribing-historic-maine-land-documents/classify/workflow/18383
You might need to reload a couple of times to get a tall page, but scrolling to the bottom of the page:

1. Starts to scroll both subject and task.
2. Pauses scrolling the page while the task scrolls.
3. Starts scrolling the page again when you reach the bottom of the task.
This makes getting to the bottom of the page harder than it really needs to be. In this particular case, this happens because there's a lot of text in the task box on the righthand side. Maybe the transcription task instructions can be made shorter, or hidden once I don't need to read them?
There's a yes/no question at the very bottom of the righthand column for transcription workflows, meaning you have to scroll down to it on every single subject (or just tab to it to bring it into view.)
March 24th 2025, 9:48 am
Here's the latest update from the Zooniverse neurodiversity research working group - a brief introduction to the topic and terminology:
"Neurodivergent? Neurotypical? Neurominority? Everyone is on the spectrum: Neurodiversity explained"
March 24th 2025, 7:51 am
Thanks again for following up @markbouslog! The above screenshots come from Safari 18.3.1 (18620.2.4.111.9, 18620) on macOS 13.7.4.
March 24th 2025, 6:44 am
Thank you for reporting this issue @ZngabitanT. Could you email contact@zooniverse.org or share here your operating system and browser versions? I'm not able to recreate the issue shown in the screenshots on my computer, but I have virtual access to other OS and browser combinations.
The 2.0 workflows include a setting to limit the subject height; I'm not sure that setting is intended for the taller-image 2.0 workflows. I'll follow up with the Gravity Spy team to confirm or adjust that workflow setting, though it may not be related to the issue noted.
March 24th 2025, 6:20 am
> That may be because on FEM workflows of this type, there isn't an annotate button, while in PFE workflows, there is an annotate button

You're right that the image toolbar is different in FEM compared to PFE. In FEM, the annotate button was removed from workflows without a drawing task about a year ago; many volunteers found it confusing since it didn't serve a function in those workflows. I wouldn't expect an annotate button to disable pan/zoom, which is the only function it would serve if included in the image toolbar for the noted workflows. That said, I'll pass your suggestion along to our designer and the team for further consideration.
> the highly annoying separate scrolling of the image and the task pane

I can see how this would be frustrating, especially with tall portrait images that have been zoomed or panned to a specific spot. Could you provide an example workflow? If you're referring to workflows often found within the NfN projects, that's sufficient; no specifics necessary. If there's a common set of steps that illustrate this issue, that would also be helpful. For NfN workflows, I pan and zoom to the label, position it in the middle of the tasks, then tab through the various tasks. If the step has a lot of tasks (like in the Whodunit? workflow at https://www.zooniverse.org/projects/cmnbotany/notes-from-nature-capture-the-collections/classify), when I tab to the last tasks the page scrolls/shifts and I have to re-position the specimen label. That's not ideal, but it happens in both PFE and FEM UIs for me, and I'm not sure I'm thinking of the combination of subject/workflow/tasks that best illustrates the scrolling issue you note. Please let me know if there's a better example or specific steps to reproduce the issue.