the score of AVA image #8
Comments
Hi,
Thanks for your interest!
As you may have also seen in other papers, the score distribution is roughly
Gaussian with long tails -- most images sit around the midpoint
(score 5 out of 10). If you train a plain regression model, it can blindly
assign every image the midpoint score and still look good under a measurement
such as Euclidean error, which is widely used for regression. But this is of
course not what we want.
Modeling it as a binary classification (high vs. low) avoids this to some
extent, but a 2-way classifier cannot analyze the majority of images
around score 5 -- it simply does not distinguish between them. In practice, one may want
an automated mechanism to obtain the aesthetic score and use it for editing. Therefore we
turn to other scientific measurements, such as Spearman's rho used in our work,
and use a rank loss to explicitly make these subtle differences more distinct.
So I'd suggest you consider a pairwise rank loss and evaluate with rank-based
measurements.
Regards,
Shu
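For concreteness, here is a minimal NumPy/SciPy sketch of the two ideas above: a pairwise (margin) rank loss that penalizes pairs whose predicted order contradicts their ground-truth score order, and Spearman's rho as the evaluation measure instead of Euclidean error. This only illustrates the idea, not the exact formulation used in the paper, and the toy scores below are made up.

```python
import numpy as np
from scipy.stats import spearmanr

def pairwise_rank_loss(pred, target, margin=0.1):
    """Average hinge loss over all pairs (i, j) where target[i] > target[j]."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    diff_t = target[:, None] - target[None, :]       # ground-truth gap t_i - t_j
    diff_p = pred[:, None] - pred[None, :]           # predicted gap    p_i - p_j
    mask = diff_t > 0                                # pairs that should rank i above j
    losses = np.maximum(0.0, margin - diff_p[mask])  # hinge on the predicted gap
    return float(losses.mean()) if losses.size else 0.0

# Toy example: predictions collapsed to the mean score have a small Euclidean
# error on this skewed distribution, but the rank loss and Spearman's rho
# expose that they carry no ordering information.
scores    = np.array([4.8, 5.0, 5.1, 5.3, 7.9, 2.1])  # ground-truth mean scores
collapsed = np.full_like(scores, scores.mean())        # "always predict ~5"
ranked    = np.array([4.9, 5.0, 5.2, 5.4, 7.0, 3.0])   # order-preserving predictions

print("rank loss, collapsed predictions:", pairwise_rank_loss(collapsed, scores))
print("rank loss, ranked predictions:   ", pairwise_rank_loss(ranked, scores))
print("Spearman rho, ranked predictions:", spearmanr(ranked, scores).correlation)
```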
|
Thanks for your reply. I'll read your paper to get more details, and try pairwise rank loss and multi-task learning. |
@aimerykong hi, I am back! haha! I have not worked on this for a long time. Recently I checked the 'train_score' file and found some scores that look strange to me, like 'farm1_300_20203544192_66922b649b_b.jpg 0.000'. I checked that image and it is an oil painting; is the 0.000 there for some special purpose? |
Hi,
Yes. When an image is scored 0.00, it means the image is not used for
either training or testing, because it might be a painting, a drawing, or a
photo containing unhealthy content.
Regards,
Shu
|
Is this rule just for the score labels, or do the image style labels follow it as well? |
I believe these images were not used for any training. This can easily be
handled by finding them by name and excluding them.
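As a small illustration of that exclusion rule, the sketch below drops every 0.000-scored entry while reading the score list. It assumes the file has one "image_name score" pair per line, as in the example 'farm1_300_20203544192_66922b649b_b.jpg 0.000'; the file name used here is only a placeholder.

```python
def load_usable_scores(path="train_score.txt"):   # file name is a placeholder
    """Return {image_name: score}, skipping images scored exactly 0.000."""
    usable = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue                 # skip malformed or empty lines
            name, score = parts[0], float(parts[1])
            if score == 0.0:
                continue                 # excluded image (painting, drawing, etc.)
            usable[name] = score
    return usable
```

The same filtering by image name can be applied to any other list file before training.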
|
Hmm, maybe I didn't make myself clear. |
In our AADB dataset, there are many 0's in the attribute annotations, meaning
those attributes are not conveyed in the image. So when training a
classifier to recognize each attribute, we did not include the 0-annotated
images; we only treat -1 and 1 as a binary classification. We also sample
inversely to the class frequency for better training.
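Below is a hedged sketch of that setup: for one attribute, 0-annotated images are dropped, the remaining -1/+1 labels form a binary problem, and samples are drawn with probability inverse to their class frequency. The label array and variable names are illustrative, not taken from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.array([1, 0, -1, -1, 0, 1, -1, -1, -1, 1])  # one attribute, per image

keep = labels != 0                  # drop images where the attribute is absent
kept_idx = np.flatnonzero(keep)     # indices into the original image list
kept_labels = labels[keep]

# inverse-frequency weights: the rarer class gets sampled more often
counts = {c: np.sum(kept_labels == c) for c in (-1, 1)}
weights = np.array([1.0 / counts[c] for c in kept_labels])
weights /= weights.sum()

# draw a mini-batch of indices, roughly balanced between the two classes
batch = rng.choice(kept_idx, size=4, replace=True, p=weights)
print("sampled image indices:", batch)
```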
|
Hi,
First of all, thanks for sharing your nice work; I have run some tests with it.
Now I want to do some experiments on the AVA database using a regression model, but I am stuck on calculating the image aesthetic score. I plotted the mean score of the images and it is almost a Gaussian distribution, that is to say most images' scores are around 5 (10 is the highest score). Any suggestion for handling this problem? Or, when you trained your model on the AVA database, how did you calculate the score?
Maybe I should treat it as a classification problem of low/high aesthetic quality?
Thanks in advance~~
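For reference, one common way to turn the AVA annotations into a single score is the mean of the 1-10 vote histogram, which is what produces the Gaussian-looking distribution described above. The sketch below assumes the usual AVA.txt layout (row index, image id, then ten vote counts for scores 1..10, followed by tag/challenge ids); please double-check against your copy of the file, and note the path is illustrative.

```python
import numpy as np

def mean_score_from_votes(votes):
    """votes: iterable of 10 counts for scores 1..10 -> weighted mean score."""
    votes = np.asarray(votes, dtype=float)
    scores = np.arange(1, 11)
    return float((votes * scores).sum() / votes.sum())

def load_ava_mean_scores(path="AVA.txt"):          # path is illustrative
    means = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            image_id = parts[1]                    # second column: image id
            votes = parts[2:12]                    # next ten columns: vote counts
            means[image_id] = mean_score_from_votes(votes)
    return means

# a toy vote histogram concentrated around the middle scores
print(mean_score_from_votes([0, 1, 5, 17, 38, 36, 15, 6, 5, 1]))
```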