Thank you for the reply.
Yeah, I did not even think about that at all. It works better with computer games, I suppose. Couldn't it be done in a way where a researcher awarded badges to contributors they thought merited them? That way there is no target to aim for, so there is no competition, but good contributions are still rewarded, and there is less rivalry and less pressure. This would be in line with the suggestions of the article you linked: "Design personalized feedback into the game as a means to recognize quality rather than quantity."
I think of this because I received an email from one of your interns saying something like I had done such a good job that I'd doubled the amount of data received in one day on the galaxy bars project. Now, I have only done 200 or so galaxy pictures, and from what I've seen 6,000+ have been completed in total, so I know that statement is simply not true, yet I couldn't help but feel a little fuzzy at the personalised feedback (even my own brain, knowing it is probably false, likes to fool itself with a compliment every now and again).
Anyway, I hope I haven't wasted your time, and thank you for the reply; the article was also an interesting read!
I also notice your profile has your own picture... I have no idea how to do that yet, haha... so that's my bad. Off to figure that out right now.
Anyway, fantastic job with the Zooniverse. Please understand, I write this not out of frustration, but because I enjoy these projects and learning about new topics, and I would like to put forward a few ideas and contribute in any way I can.
What a neat idea, WillowSkye!
1} OK, here's one that's a bit of a logic puzzle for you (don't shoot the messenger):
Long ago, a man was captured in the deep, dark jungle by renegade sloths (hey, it can happen). They told the man: "Make a statement - if you speak the truth, we will hang you. If you say something false, we will shoot you." The man thought it over for a while and then uttered a sentence that confused the sloths so totally that they threw their hands... paws... up and let him go. What did he say?
2} ...and this one is by J.R.R. Tolkien (really!): "A box without hinges, key or lid. Yet golden treasure inside is hid."
For no. 1, my first guess is "Everything I say is false."
No. 2 will take more thought.
Actually, @inge24, "You will shoot me" and "Everything I say is false" are precisely the same statement - we are either both right or both wrong.
"Egg" as an answer will teach me not to assume all boxes are square with corners.
It is a shame the change hasn't also solved the false "It appears we've run out of data!" message problem related to automatic signing-out.
I'm happy that someone else has reported the false "We're out of data!" notification, which is definitely a bug, so it finally won't look like it is just me. It always appears at the moment the page signs me out and can't sign me back in (I never sign out myself on my computer).
Nope, it is a common problem, and not only for PP. It isn't on your side. It signs me out and then gives me the false NO DATA TO CLASSIFY message quite often, and others report the same problems too.
No need for these false accusations.
Private messages are just that: PRIVATE.
An example image of a gazelle
Detected animal using the background comparison technique
Zooniverse's outgoing Data Scientist Greg Hines wrote the following post about how we can use comparison to an average background to detect animals in camera trap images.
Suppose you have a series of images taken over a period of time from a fixed location. You want to know if there is something in each of those images. For example, you have a webcam set up that regularly takes a photo of a room - does anyone enter that room?
If you have a gold standard blank image and you know that the only thing that can change is someone entering the room - the solution is simple. If there is any difference between one of the images and the blank - someone is there. But what if other things can change? For example, lighting - there might be a window in the room. Or for something like Snapshot Serengeti we could be looking at a bunch of trees - the leaves could be blowing in the background. That's technically movement but not the kind we want.
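Before handling those complications, the idealised blank-background case above can be written as a simple pixel-wise difference against the reference frame. A minimal sketch (the file names and the tolerance of 10 grey levels are illustrative assumptions, not from the original post):
import cv2
import numpy as np

# Hypothetical reference ("blank") frame and a frame to test, both read in greyscale
blank = cv2.imread("blank.jpg", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)

# Flag the frame as occupied if any pixel differs by more than the tolerance
diff = cv2.absdiff(frame, blank)
print("someone is there" if np.any(diff > 10) else "empty")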
Snapshot Serengeti provides timestamps and locations for all images, so we can look at a time series of images. There is a tradeoff - the more images we have in our time series, the more accurate our calculations can be. But things change over time - grass, trees and leaves grow and die - so the time series probably shouldn't span months; probably just days at most. We should also remove night-time images - images whose average brightness is below some threshold (a sketch of this step appears after the mean image below). We'll then read in the images:
import glob
import cv2
import numpy as np
import matplotlib.pyplot as plt

axis = 0  # colour channel index to read
time_series = []
for fname in glob.glob("/home/ggdhines/Databases/images/time_series/*.jpg"):
    img = cv2.imread(fname)[:, :, axis]   # keep a single colour channel
    equ = cv2.equalizeHist(img)           # equalise to compensate for lighting differences
    f = equ.astype(float)
    time_series.append(f)
axis = 0 means that we are only reading in a single colour channel (note that OpenCV loads images in BGR order, so channel 0 is the blue channel) - we could also read the images in greyscale. (Just experimenting with stuff.) Equalising the image with cv2.equalizeHist (http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0) helps to account for differences in lighting. We can look at the average image with:
mean_image = np.mean(time_series, axis=0)  # per-pixel mean over the time series
plt.imshow(mean_image)
The calculated average background image
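The night-time filtering step mentioned above isn't shown in the original snippets; a minimal sketch, assuming we simply drop frames whose mean equalised brightness falls below a cutoff (the value of 40 is an illustrative assumption, not from the original post):
# Keep only daytime frames: discard any frame whose average brightness is too low
BRIGHTNESS_THRESHOLD = 40.0   # illustrative cutoff value
day_series = [f for f in time_series if f.mean() >= BRIGHTNESS_THRESHOLD]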
We can also calculate "percentile images":
upper_bound = np.percentile(time_series, 80, axis=0)   # per-pixel 80th-percentile value
which gives us the 80th-percentile value at each pixel (i.e. 80 percent of the values at that pixel in our time series are less than or equal to this value). Similarly, we can calculate the lower bound:
lower_bound = np.percentile(time_series, 20, axis=0)   # per-pixel 20th-percentile value
Now let's take one of the equalised images and look for places where we have "extreme" pixels - pixels that lie below the 20th percentile or above the 80th:
template = np.zeros(img.shape, np.uint8)                             # blank mask, same size as the image
t2 = np.where(np.logical_or(equ > upper_bound, equ < lower_bound))   # indices of "extreme" pixels
template[t2] = 255                                                   # mark them in the mask
Finally, we apply an opening operation to remove isolated points (noise) (http://docs.opencv.org/3.1.0/d9/d61/tutorial_py_morphological_ops.html#gsc.tab=0):
kernel = np.ones((5, 5), np.uint8)   # structuring element; the 5x5 size is an assumption - see the full code for the value actually used
opening = cv2.morphologyEx(template, cv2.MORPH_OPEN, kernel)
The full code is at - https://github.com/zooniverse/aggregation/blob/master/time.py
Below are some examples - there are some false positives where a change in the sky is detected (we could filter out sky pixels), but a few false positives aren't a big problem. We see that animals are definitely detected. If we ran DBSCAN we could look for clumps of "extreme" pixels - if there are none, we have a blank image.
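A minimal sketch of that last idea, clustering the coordinates of the "extreme" pixels with scikit-learn's DBSCAN (the eps and min_samples values are illustrative assumptions, not taken from the original post):
import numpy as np
from sklearn.cluster import DBSCAN

# Coordinates of the "extreme" pixels that survived the opening operation
ys, xs = np.where(opening == 255)
points = np.column_stack([xs, ys])

if len(points) == 0:
    print("blank image")
else:
    labels = DBSCAN(eps=5, min_samples=20).fit(points).labels_   # label -1 marks noise points
    n_clumps = len(set(labels)) - (1 if -1 in labels else 0)
    print("blank image" if n_clumps == 0 else "%d clump(s) of extreme pixels detected" % n_clumps)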
This post was originally posted here. Please review the example images that follow, then add comments below and tell us what you think!
Here are some examples. In each case, the first image is the original captured photo, and the second one shows blue dots for the "detected change from background average".
... Post continues below ...
In reply to kellinora's comment:
Great point, Jean. I think that kind of very deep, content-level discussion best lives within the project Talk boards. However, if there are educators asking here for this kind of information, we'll definitely direct them to where it lives. We would certainly encourage any moderators and zooites interested in having conversations about the educational applications of any Zooniverse project to do so on the Education board. We're excited to finally have a dedicated space for these conversations!
Thanks kellinora!
There are, I think, many facets to this, perhaps pointing to internet-based sources of learning (etc) that are not currently being studied, or perhaps even recognized. Two examples: