Hi all, the 2nd of my Internet radio quick reviews of Zooniverse audio projects using the NVDA screen reader aired just after Christmas, but it's still available on the TGV shows page of our brand-new website, which is at the same address as the old one:
https://www.theglobalvoice.info
It's also still available at the old link:
http://www.theglobalvoice.info/gallery.php?show=oas
This time I reviewed Earthquake Detective. It's only a half-hour show, so I didn't get to review as much as I wanted, but I covered all the practice data sets and a tiny bit of the current workflow.
The show is a monthly half-hour one called Odds and Sods. The review/walkthrough is geared toward blind screen reader users interested in participating in Zooniverse projects, particularly users advanced enough to use some keyboard navigation shortcuts, but I thought I'd post it here and mention all of you for anyone interested in hearing how I interact with a project.

Again, I assume the answers in the training workflow are visual only, showing up only in the spectrograms. I mention a bit of confusion, since there are quite similar clips in the "noise" and "none of the above" sections; going by the sounds in the tutorial I would have called those noise, though I was unsure in the weeks before I discovered the training buttons. I'll suggest again that comments on the training subjects' Talk pages would be a good-enough-for-government-work solution, but there needs to be something in the tutorial directing screen reader users to the Talk pages when in doubt about a training clip.

P.S. @grahamdove y'all are next.
@eatyourgreens @Yli @EcceruElme @christingle @borisroesler @Vivitang @Pmason
For video, a couple of helpful reference projects (with 10-15 second video clips) are:
https://www.zooniverse.org/projects/sassydumbledore/chimp-and-see
https://www.zooniverse.org/projects/canagica/battling-birds
Wow, thank you very much for your feedback @kb7clx. We will definitely try to weed out clips that are too short before they enter the list of audio clips to be classified. Regarding the field guide position, I am not sure how to keep it from ending up hidden below the Privacy Policy links etc. I thought its position was the default, but that might not be the case? Asking the Zooniverse folks here!
Thanks again for taking the time to review our project!
I have not worked with drawing tools on spectrograms, but I expect, based on how things work with drawing tools on images in general, that the data you are seeing will give you the start and stop times the rectangle extends over, with a bit of scaling.
In general, on images, the drawing tools' point locations (such as corners of rectangles, centroids of ellipses, or simple points) are given in pixels, with the origin at the top left corner, x increasing to the right, and y increasing down.
A spectrogram is basically a fixed .png image with an animated cursor overlaid on it, which "moves" in sync with the audio file. So, for instance, if you use the "show image" or "save image" commands on a spectrogram, you capture the base .png file. You can look at this image to determine the pixel size. I would expect the drawing tool coordinates to refer to pixel counts of this base .png image in the normal pixel coordinate system. For a rectangle I would have expected a y value and height as well, but these may be suppressed for spectrograms, where their interpretation is not very clear compared to the x axis, which is simply related to time in the audio file.
IF this is the case, I believe you can get the start and stop edges of the rectangle from the data - in your example, "x":438.1483154296875 may be the pixel location of the left-hand edge of the rectangle, and "width":45.33642578125 may give the right-hand edge at 438.1 + 45.3 = 483.4 pixels. To convert to time, I believe you would divide these numbers by the pixel width of the .png image (for several of your subjects I got the width as 1169 pixels) and multiply by the total nominal duration of the audio clip.
This will be fairly easy to test - draw a rectangle from the full left margin of the spectrogram to the full right margin and verify that the x value is 0 and the width is the .png image pixel count. Then try a few rectangles with easily compared extents - left edge to center, one quarter to three quarters, etc. - and verify the x and width values correspond to what you would expect for pixel values given the overall size of the .png image.
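If that interpretation holds, the conversion is just a linear rescale. Here's a minimal sketch in Python - the 1169-pixel width is the figure I measured on your subjects above, but the clip duration is a made-up placeholder you would swap for your actual clip length:

```python
# Sketch: convert a rectangle annotation's horizontal extent on a
# spectrogram into start/stop times within the audio clip, assuming
# pixel 0 = clip start and the full image width = clip end.

IMAGE_WIDTH_PX = 1169   # width of the base .png, measured per subject
CLIP_SECONDS = 10.0     # placeholder duration - substitute your clip's

def rect_to_times(x, width, image_width_px=IMAGE_WIDTH_PX,
                  clip_seconds=CLIP_SECONDS):
    """Return (start_s, stop_s) in seconds for a rectangle's x extent."""
    start_s = (x / image_width_px) * clip_seconds
    stop_s = ((x + width) / image_width_px) * clip_seconds
    return start_s, stop_s

# The example values quoted above:
print(rect_to_times(438.1483154296875, 45.33642578125))
# -> roughly (3.75, 4.14) for a nominal 10 s clip
```

The full-margin test above then amounts to checking that rect_to_times(0, 1169) comes back as (0.0, clip_seconds).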
If I were designing it, I would allow audio to play only within the markers once a marker is placed, and let the user inch the marker closer and closer, as in an audio editor, by moving the cursor with rewind, pause, and play. Each time he hits the same marker key again, it doesn't add a new marker but moves the working one. He can use control+A to select all and remove the marker if he goofed and went too far. When he's satisfied he's got both markers where he wants them, he hits F for finish; the whole clip can then be played again, and everything between the markers is the selection. He can then pick another section to mark; wash, rinse, repeat until all are marked. Each time he finishes marking something, he can check the radio button of the call that matches what he just marked, click next, and mark the next section.
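A rough sketch of that marker logic as state, in Python purely for illustration - all the names here are mine, not anything from an actual Zooniverse workflow:

```python
class MarkerSelection:
    """Two-marker selection along an audio clip, as described above.

    Times are seconds from the start of the clip. Re-pressing a
    marker key moves that marker instead of adding a new one.
    """

    def __init__(self, clip_length):
        self.clip_length = clip_length
        self.start = None       # left marker, seconds
        self.end = None         # right marker, seconds
        self.finished = False

    def place(self, which, cursor_time):
        # Hitting the same marker key again just moves the working one.
        if which == "start":
            self.start = cursor_time
        elif which == "end":
            self.end = cursor_time
        self.finished = False

    def playable_range(self):
        # Once a marker is placed, playback is confined to the region
        # between the markers (falling back to the clip edges).
        lo = self.start if self.start is not None else 0.0
        hi = self.end if self.end is not None else self.clip_length
        return lo, hi

    def clear(self):
        # The control+A escape hatch: start over if the user overshot.
        self.start = self.end = None
        self.finished = False

    def finish(self):
        # F for finish: lock in the selection; the caller can then
        # replay the whole clip and attach a label (the radio-button
        # step), before starting on the next section.
        self.finished = self.start is not None and self.end is not None
        return self.finished
```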
Great to see your post and your school's interest.
LOTS of ways for volunteer groups to participate. If you go to https://www.zooniverse.org/projects, you can see the list of all currently active projects. Does your community have a particular interest area? You can use the filter to sort to a specific topic -- space, nature, language, etc.
For example, we recently had a group of 50 people interested in volunteering on projects related to environmental sustainability. This was the process they followed:
Note - they felt it was easier to provide a shorter list of projects to choose from, but this isn't necessary.
Also note - every project has a stats page showing the expected time to completion for the current dataset, e.g., https://www.zooniverse.org/projects/penguintom79/penguin-watch/stats. It gives you a sense of whether the project will still have data that needs classifying on the day of your event.
Note -- Their full-day event included other experiences, but they'd set aside 2 hours (1-hour in the morning and 1-hour in the afternoon) specifically for classifying.
50 people x ~100 classifications / hour (though this can vary quite widely from project to project) x 2 hours = 10,000 classifications
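For what it's worth, that back-of-the-envelope estimate as a trivial calculation (the ~100/hour rate is the rough figure above, not a measured one):

```python
people, rate_per_hour, hours = 50, 100, 2
print(people * rate_per_hour * hours)   # -> 10000 classifications
```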
b) the 4 recommended projects -- clicked into the landing page and the 'About' page for each (including the 'research', 'team', and 'results' tabs) to share a brief 2-minute overview of each project.
They said this was a really important part of the experience -- talking about things they'd seen while classifying, what it meant to them personally to participate in the projects, etc.
Reflection Questions:
PLEASE do post here on what approach you use and how the experience goes for you. It'll be great to hear and share with others. Thank you!!
Note -- I recommend you remove your phone number (and possibly your email address too) from your post above -- if you click edit on your post, you can remove that line.
I auditioned a clip which started with the audience response to the end of a song, then was mostly silent with just a tiny bit of talking at the end. The UI wouldn't allow me to proceed past my selection of "audience applause".
I’m trying to build a project that has users listen to short audio clips and label any species they hear in the audio on an accompanying spectrogram (using the rectangle tool).
So far I’ve had pretty good success setting this up but I do have a few issues:
Is it possible to display an audio file and a spectrogram as separate frames in the same task without having them automatically joined together?
Also, I’ve noticed that you can draw boxes in the space of the audio player with my current arrangement. Does that mean any of the y-coordinates I get are likely to be offset from the spectrogram?
Any help or suggestions would be much appreciated.
Apologies for the late reply - I somehow missed this one.
There was a previous discussion on this involving @darkeshard; a link to that discussion is here
I think you need to talk directly to the Zooniverse developers, since some of these tools are experimental and have some quirks.
I am a bit confused about how you would like to use the drawing tool to label species - were you intending to have several colours of rectangle, one for each species?
I am envisioning that an audio clip can contain more than one species, and you want to mark which species is heard when?
I'm fortunate to live along the California coast, and on occasion (Spring and Fall) I view the migration of humpbacks and greys from the bluff on my walks. Right now (Fall) it's the southerly migration to the Baja region. Here's an extraordinary clip recently filmed about an hour south of here. "Whales as Individuals" is a fun and intriguing project.
https://www.youtube.com/watch?v=T2Xsfb4cT9Y