I'm primarily a space-project addict and I'll have a look at all that are put forward, although I concentrate on a favourite ongoing few... such as Galaxy Zoo, Asteroid Zoo, Solar Stormwatch and Disk Detective. I've enjoyed the short-lived projects too, as it feels good to see a completed job. I particularly liked Ice Hunters and the supernova one.
However, I also love other projects, so I often break from space to the earthbound type... Serengeti, Chicago, Seafloor, even Worm Watch lol. Higgs Hunters is a tough one; I enjoy the challenge but I think I'm rubbish! Haha!
40 Participants
55 Comments
First a bit of a rant - feel free to skip to get to the suggestions for your case below, but project owners really need to consider these things!
Welcome to the world of citizen science. No matter the task type, there are variations in the responses between volunteers - even simple yes/no questions with a clear "correct" response will receive "incorrect" responses, due to fat fingers, malicious activity, inattention, and misunderstanding. The more genuine uncertainty there is about the "correct" answer, the more likely there will be variation. Some of this uncertainty is inherent - even experts may not be certain of the answer - but much of the spread in results comes from how the instructions are interpreted, and from the quality of the training examples and the explanation of what is required.
So the project owner's first line of defense must be to make the task as simple, and as repeatable by different individuals, as possible - examples:
It is also important to consider what is being asked of the volunteers:
So how the variation in responses is handled depends very much on the task type and the science or meaning of the responses.
For simple questions or survey-task selections it is common to use a vote-fraction cut-off - consensus is reached if some minimum fraction of the volunteers chose the same response; 60% is a very typical cut-off for this sort of task. Subjects with consensus are taken as having "the" answer; those without consensus may be handled in other ways, such as being recycled for more classifications or handed off to an "expert", or simply handled statistically by grouping them in some distribution.
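As a minimal sketch of a vote-fraction cut-off (the 60% threshold and the input format below are only illustrative assumptions, not taken from any particular project):

from collections import Counter

def consensus(responses, threshold=0.6):
    """responses: one subject's answers, e.g. ['yes', 'yes', 'no'].
    Returns (answer, fraction) if the top answer reaches the threshold, else None."""
    if not responses:
        return None
    answer, votes = Counter(responses).most_common(1)[0]
    fraction = votes / len(responses)
    return (answer, fraction) if fraction >= threshold else None

print(consensus(['yes', 'yes', 'yes', 'no', 'yes']))  # ('yes', 0.8) - consensus reached
print(consensus(['yes', 'yes', 'no', 'no']))          # None - recycle for more classifications or expert review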
Transcription tasks - especially short verbatim transcription of text shown in an image (for example museum labels or fields on printed forms) - require some form of reconciliation to choose the best response from those received. Reconciliation usually involves a rules-based comparison, often with fuzzy matching such as Notes from Nature's reconcile.py. At best it can clearly identify exact matches between volunteers as the "correct" answer, and weight fuzzy or partial matches in some way, usually displaying the results in a form where the basis for the final answer determined by the software can be easily understood and edited as needed. The longer the free transcription and the less constrained the responses, the harder it is to reconcile to a "correct" version. Sometimes all the individual responses to free transcriptions are simply recorded in some searchable way, with no attempt to come down to a single response.
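A rough sketch of the idea - exact-match voting first, then a fuzzy pass - is below. This is not reconcile.py's actual logic, just an illustration using the standard-library difflib, and the thresholds are assumptions:

from collections import Counter
import difflib

def reconcile(transcriptions, vote_fraction=0.6, fuzzy_cutoff=0.9):
    """Return (best_text, status) for one field's set of volunteer transcriptions."""
    normalised = [t.strip().lower() for t in transcriptions]
    text, votes = Counter(normalised).most_common(1)[0]
    if votes / len(normalised) >= vote_fraction:
        return text, "exact match"
    # Fuzzy pass: count responses that are nearly identical to the most common string.
    close = [t for t in normalised
             if difflib.SequenceMatcher(None, t, text).ratio() >= fuzzy_cutoff]
    if len(close) / len(normalised) >= vote_fraction:
        return text, "fuzzy match - worth a quick review"
    return text, "no consensus - flag for manual editing"

print(reconcile(["Panthera leo", "panthera leo ", "Panthera Leo"]))  # ('panthera leo', 'exact match')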
Drawing tools, your actual problem, can be aggregated in various ways. Most common is some sort of spatial point clustering using a simple clustering algorithm such as DBSCAN. This can be done on 2D and 3D points (including points derived from figures, such as centroids or corners), though it can also be applied to areas or constructed quantities such as (area, x, y), where the constructed quantities are treated as points in an arbitrary-dimensioned space. DBSCAN can be used on any data set where some measure of nearness can be defined and calculated between two elements, and is quite useful (for example, using the Levenshtein distance between strings of text as a measure of nearness allows even text strings to be clustered, with a fuzziness of matching determined by the clustering parameters).
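A minimal sketch of clustering volunteer point marks with scikit-learn's DBSCAN follows; the eps (pixel radius) and min_samples values are assumptions that would need tuning for a real project:

import numpy as np
from sklearn.cluster import DBSCAN

# (x, y) marks placed by several volunteers on the same subject image
points = np.array([[102, 208], [105, 211], [99, 205],    # three marks near one feature
                   [340, 120], [338, 123],                # two marks near another
                   [600, 450]])                           # a stray mark (noise)

labels = DBSCAN(eps=10, min_samples=2).fit_predict(points)

for label in sorted(set(labels) - {-1}):                  # -1 is DBSCAN's noise label
    cluster = points[labels == label]
    print("cluster centre:", cluster.mean(axis=0), "from", len(cluster), "marks")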
Another way to aggregate figures is opacity - the idea here is that you give each figure (such as your rectangles) an opacity, place layers corresponding to each volunteer's drawings in a stack, then look for the area of the whole stack which reaches some minimum opacity. This sounds a bit complex but is actually easy to do using common image-handling routines such as OpenCV. This method effectively finds the common area of the volunteers' drawings. Using a second gate, one can measure the consistency of the overlap, which is a measure of the variation of the figures beyond the common area. I used this extensively for Worlds of Wonder, where rectangles were drawn around illustrations with little constraint as to where the edges of an illustration were located (and indeed whether a plate was one figure or many smaller figures). Using opacity and image-manipulation software also provides tools like thresholding, dilation/erosion and outer-bound marking to detect and interpret overlapping figures.
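Here is a rough sketch of the opacity-stack idea with NumPy and OpenCV: rasterise each volunteer's rectangle as a filled layer, sum the stack, and keep the pixels covered by at least some minimum number of volunteers. The rectangles and the minimum count of 3 are made-up example values:

import numpy as np
import cv2

image_shape = (400, 600)   # (height, width) of the subject image
rects = [(50, 60, 200, 150), (55, 58, 205, 155), (48, 65, 198, 160),   # three volunteers, roughly agreeing
         (300, 20, 80, 40)]                                             # one outlier drawing
min_votes = 3

stack = np.zeros(image_shape, dtype=np.uint8)
for x, y, w, h in rects:
    layer = np.zeros(image_shape, dtype=np.uint8)
    cv2.rectangle(layer, (x, y), (x + w, y + h), color=1, thickness=-1)  # filled rectangle = one "opaque" layer
    stack += layer   # each pixel now counts how many volunteers covered it

common = stack >= min_votes            # the area enough volunteers agree on
ys, xs = np.nonzero(common)
if xs.size:
    print("consensus rectangle (x, y, w, h):",
          xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
else:
    print("no area reaches the minimum opacity")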
Direct me to your project and perhaps I can give you some further ideas. I can provide the various Python scripts I have for this sort of thing, but they are likely too specific to particular projects to be of much use beyond the ideas they illustrate.
3 Participants
4 Comments
How do I add an in-bound tutorial to my project? Some projects have in-bound tutorials. On their build page, "Tutorial" shows up after "Collaborators", and I assume a build page for the tutorial sits behind that. How do I get started so that the tutorial option is available?
2 Participants
3 Comments
Oops. Sorry, I meant sublimation.
I think I had subduction on my mind because I was working on those non-volcanoes
Personally, I think there is no subduction to speak of as a heat source. You can see obvious crushing and land deformation at Wright Mons, so some heating is occurring at these points of contact, but subduction is what happens here on Earth with our rocky core. At Wright Mons the scene looks more like a bulldozer plowing into a muddy berm. Yes, there's some heat from this, but subduction? Layers sliding under each other don't seem to fit the scene. That's my deduction about subduction, and it's brief because I don't see the evidence for it. Do you? When you look at a glacier on Earth (frozen ice moving at a snail's pace, rolling over terrain and crushing rocks), do you envision this rock-hard ice as having a solid rocky core at any location? Nope. It just makes sense that the ice is too dense to allow the dust to accrete. Same thing on Pluto.
I see Pluto as having pockets of dense fluid trapped in ice cracks, where some space dust and tholin is bound to be trapped in these areas under the terrain, as the convection process would fold some of the tar-like tholin into warmed pockets that cool and drop back down below the surface. I still feel the rocky-core model doesn't work out here, with these temperatures, the relatively low gravitational pressure, and no tidal flexing as an energy source for separating out space dust and other particles.
The dark red tholin around the equator (which also covers the entire planet) speaks to a long time of peace on Pluto. During this peaceful time small objects regularly impacted Pluto, building it larger and larger, but the impacts, while melting some surface materials, couldn't deliver enough energy to heat the core. Look at the western dried basin on page 5. Some healthy-sized objects impacted at this site; they were large enough to create a really large basin which filled with fluid. Later that fluid drained away and left a footprint. The impacts at the dried basin demonstrate how impacts fracture the land, allowing softer, warmed-up, flowing material to release from near the surface, but without demonstrating much in the way of penetrating effects.
So Pluto was built up slowly over time with less damaging impacts, which would not have delivered enough energy to penetrate to its core (at these temperatures and pressures) and which otherwise would have allowed particles to separate out and become a rocky core like Earth's. The thick, tar-like tholin says Pluto was not geologically active for a really long time. Then, bang, something changed dramatically: Sputnik Planum came into existence and methane snow began releasing from the north. You know my theory on this.
We've all seen the animations of how our Earth coalesced into a molten ball and then cooled, retaining its hot molten core. This scene doesn't work for Pluto. After Pluto reached a certain size, its cold core remained cold. It's only, what, 40 degrees above absolute zero. The objects impacting Pluto were all big cold snowballs, not rock-hard packed dirt balls. Our Earth bakes in the sun, and for Earth a rocky core makes sense, but not on Pluto. To me, the evidence points clearly away from subduction.
I think there's some interesting stuff occurring in the patterns displayed by sublimation.
26 Participants
579 Comments
There was a similar post here. The part of interest for you is:
Drawing tools, your actual problem, can be aggregated in various ways. Most common is some sort of spatial point clustering using a simple clustering algorithm such as DBSCAN. This can be done on 2D and 3D points (including points derived from figures, such as centroids or corners), though it can also be applied to areas or constructed quantities such as (area, x, y), where the constructed quantities are treated as points in an arbitrary-dimensioned space. DBSCAN can be used on any data set where some measure of nearness can be defined and calculated between two elements, and is quite useful (for example, using the Levenshtein distance between strings of text as a measure of nearness allows even text strings to be clustered, with a fuzziness of matching determined by the clustering parameters).
Another way to aggregate figures is opacity - the idea here is that you give each figure (such as your rectangles) an opacity, place layers corresponding to each volunteer's drawings in a stack, then look for the area of the whole stack which reaches some minimum opacity. This sounds a bit complex but is actually easy to do using common image-handling routines such as OpenCV. This method effectively finds the common area of the volunteers' drawings. Using a second gate, one can measure the consistency of the overlap, which is a measure of the variation of the figures beyond the common area. I used this extensively for Worlds of Wonder, where rectangles were drawn around illustrations with little constraint as to where the edges of an illustration were located (and indeed whether a plate was one figure or many smaller figures). Using opacity and image-manipulation software also provides tools like thresholding, dilation/erosion and outer-bound marking to detect and interpret overlapping figures.
For rectangles specifically you can:
While I have not worked with the Zooniverse aggregation script, I would have expected it to give you a list of all the x's, another of all the y's, and likewise lists of the h and w values of all the rectangles drawn for a specific tool type such as "blue" rectangles, with additional lists for any other tools such as "red" rectangles. Every rectangle should have four values, and aggregation should return four lists for each tool. If not, then you need to flatten and aggregate using a custom script such as the one developed for Whales as Individuals.
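As a hedged illustration of what that flattening might look like, here is a sketch that collects rectangle values into four lists per tool. The annotation structure shown is an assumption - the real export format depends on the workflow and the extraction script used:

from collections import defaultdict

# One entry per drawn rectangle, already parsed out of the classification export
annotations = [
    {"tool": "blue", "x": 10.2, "y": 44.0, "width": 120.5, "height": 80.1},
    {"tool": "blue", "x": 12.0, "y": 40.3, "width": 118.0, "height": 84.6},
    {"tool": "red",  "x": 300.0, "y": 15.5, "width": 60.0, "height": 42.0},
]

by_tool = defaultdict(lambda: {"x": [], "y": [], "w": [], "h": []})
for a in annotations:
    lists = by_tool[a["tool"]]
    lists["x"].append(a["x"])
    lists["y"].append(a["y"])
    lists["w"].append(a["width"])
    lists["h"].append(a["height"])

print(by_tool["blue"]["x"])   # all x values for the "blue" rectangle tool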
4 Participants
5 Comments
An example image of a gazelle
Detected animal using the background comparison technique
Zooniverse's outgoing Data Scientist Greg Hines wrote the following post about how we can use comparison to an average background to detect animals in camera trap images.
Suppose you have a series of images taken over a period of time from a fixed location. You want to know if there is something in each of those images. For example, you have a webcam set up that regularly takes a photo of a room - does anyone enter that room?
If you have a gold standard blank image and you know that the only thing that can change is someone entering the room - the solution is simple. If there is any difference between one of the images and the blank - someone is there. But what if other things can change? For example, lighting - there might be a window in the room. Or for something like Snapshot Serengeti we could be looking at a bunch of trees - the leaves could be blowing in the background. That's technically movement but not the kind we want.
Snapshot Serengeti provides timestamps and locations for all images, so we can look at a time series of images. There is a tradeoff - the more images we have in our time series, the more accurate our calculations can be. But things change over time - grass, trees and leaves grow and die - so the time series probably shouldn't span months; more likely just days at most. We should also remove night-time images - images where the average brightness is less than some threshold. We'll then read in the images:
# imports needed by the snippets below (not shown in the original post)
import glob
import cv2
import numpy as np
import matplotlib.pyplot as plt

axis = 0                       # keep a single colour channel of each loaded image
time_series = []
for fname in glob.glob("/home/ggdhines/Databases/images/time_series/*.jpg"):
    img = cv2.imread(fname)[:, :, axis]
    equ = cv2.equalizeHist(img)    # equalise the histogram to reduce lighting differences
    f = equ.astype(float)
    time_series.append(f)
axis = 0 means that we are only reading in one colour channel - since cv2.imread loads images in BGR order, channel 0 is actually the blue channel rather than the red one - but we could also read the images in grayscale. (Just experimenting with stuff.) Equalizing the image (cv2.equalizeHist, http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0) helps to account for differences in lighting. We can look at the average image with:
mean_image = np.mean(time_series,axis=0)
plt.imshow(mean_image)
The calculated average background image
We can also calculate "percentile images" :
upper_bound = np.percentile(time_series,80,axis=0)
which gives us at each pixel the 80th percentile value. (i.e. 80 percent of the values at the pixel in our time series are less than or equal to this value). Similarly we can calculate the lower bounds :
lower_bound = np.percentile(time_series,20,axis=0)
Let's read in the image again and look for places where we have "extreme" pixels - pixels that lie below the 20th percentile or above the 80th:
template = np.zeros(img.shape,np.uint8)
t2 = np.where(np.logical_or(equ>upper_bound , equ < lower_bound))
template[t2] = 255
Finally we apply an opening operation to remove isolated points (noise) - (http://docs.opencv.org/3.1.0/d9/d61/tutorial_py_morphological_ops.html#gsc.tab=0):
kernel = np.ones((5, 5), np.uint8)   # structuring element for the opening; the 5x5 size is an assumption, as the original snippet does not define it
opening = cv2.morphologyEx(template, cv2.MORPH_OPEN, kernel)
The full code is at - https://github.com/zooniverse/aggregation/blob/master/time.py
Below are some examples - there are some false positives where a change in the sky is detected (we could filter out sky pixels), but false positives aren't a big problem. We can see that animals are definitely detected. If we ran DBSCAN we could look for clumps of "extreme" pixels - if there are none, we have a blank image.
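As a rough sketch of that DBSCAN step (assuming `opening` is the binary mask produced above, and with eps/min_samples values that would need tuning):

import numpy as np
from sklearn.cluster import DBSCAN

ys, xs = np.nonzero(opening)                  # coordinates of the "extreme" pixels kept after the opening step
coords = np.column_stack([xs, ys])

if coords.shape[0] == 0:
    print("blank image")
else:
    labels = DBSCAN(eps=5, min_samples=25).fit_predict(coords)
    n_clumps = len(set(labels) - {-1})        # -1 is DBSCAN's noise label
    print("blank image" if n_clumps == 0 else "%d clump(s) of changed pixels" % n_clumps)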
This post was originally posted here. Please review the example images that follow, then add comments below and tell us what you think!
Here are some examples. In each case, the first image is the original captured photo, and the second one shows blue dots for the "detected change from background average".
... Post continues below ...
2 Participants
5 Comments
New paper:
The fates of Solar system analogues with one additional distant planet
Author: Dimitri Veras
The potential existence of a distant planet ("Planet Nine") in the Solar system has prompted a re-think about the evolution of planetary systems. As the Sun transitions from a main sequence star into a white dwarf, Jupiter, Saturn, Uranus and Neptune are currently assumed to survive in expanded but otherwise unchanged orbits. However, a sufficiently-distant and sufficiently-massive extra planet would alter this quiescent end scenario through the combined effects of Solar giant branch mass loss and Galactic tides. Here, I estimate bounds for the mass and orbit of a distant extra planet that would incite future instability in systems with a Sun-like star and giant planets with masses and orbits equivalent to those of Jupiter, Saturn, Uranus and Neptune. I find that this boundary is diffuse and strongly dependent on each of the distant planet's orbital parameters. (...)
(Submitted on 26 Aug 2016)
https://arxiv.org/abs/1608.07580
19 Participants
81 Comments
Hi everyone,
I recently launched my Jungle Weather project, and a recurrent question is a request for more "context" surrounding the values to transcribe. I present values (cells) from a larger table as small cut-outs, and they don't always provide enough clues to writing style or other confounding factors (e.g. the vertical line below).
Similarly, another task on the same image presented a cut-out of the header from which to transcribe set values.
I was wondering whether anyone working on Panoptes is looking at dynamic subsets of the same image, where instead of cutting the table into pieces, i.e. "subjects", coordinate boxes on the original full image could be used. This would solve various problems:
In addition it would allow re-purposing images on which bounding boxes were annotated, which then can be re-used for further evaluation (with no additional uploads).
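As a purely illustrative sketch (this is not an existing Panoptes feature), here is how coordinate boxes on a full image could be turned into cut-outs locally with Pillow; the file name, box values and padding are assumptions:

from PIL import Image

def cutout(full_image_path, x, y, w, h, pad=0):
    """Crop one cell from the full table scan, optionally with extra surrounding context."""
    img = Image.open(full_image_path)
    left, upper = max(x - pad, 0), max(y - pad, 0)
    right, lower = min(x + w + pad, img.width), min(y + h + pad, img.height)
    return img.crop((left, upper, right, lower))

# The same coordinate box serves both the tight cell and a padded "context" view.
cell = cutout("table_page_001.jpg", 420, 310, 90, 40)
context = cutout("table_page_001.jpg", 420, 310, 90, 40, pad=60)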
A lot of these problems seem recurrent in weather-rescue projects, but I can also see text transcription benefiting from these dynamic subsets.
Has any of this ever been considered as part of the project builder or is this generally too complex?
Cheers,
Koen
3 Participants
5 Comments
Using Chrome in Win 10. On the fan/blotch screen I cannot select the area. On click and draw, a portion of the image moves but the tool bounds cannot be seen/identified. Do I need to set anything else?
20 Participants
47 Comments
Whether you're interested in out-of-this-world topics or ones closer to home, there's bound to be one or more Zooniverse projects that are the right fit for you. What projects are you involved in? Which are your favorites?
Cheers,
~Meg
40 Participants
55 Comments