
Conversation

@teytaud commented Feb 18, 2020

No description provided.

@facebook-github-bot added the CLA Signed label on Feb 18, 2020
import os
import json
from nevergrad.optimization import optimizerlib
from nevergrad.functions.multiobjective.core import MultiobjectiveFunction
Reviewer:

from nevergrad.functions import MultiobjectiveFunction

Author (@teytaud):

thx.

 if nevergrad not in [None, 'CMA', 'DE', 'PSO',
                      'TwoPointsDE', 'PortfolioDiscreteOnePlusOne',
-                     'DiscreteOnePlusOne', 'OnePlusOne']:
+                     'DiscreteOnePlusOne', 'OnePlusOne', 'random', 'loss-covering', 'hypervolume', 'domain-covering']:
Reviewer:

This mixes algorithms and selection modes; I don't get it, and the docstring is not up to date.
Also, the other modes don't seem to be used anyway, since the code loops over all selection options.

Author (@teytaud):

Fixed: there is now an additional "moo" option, in which case we loop over various Pareto sampling modes.
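
For reference, a minimal sketch of what that loop could look like. The subset names come from the diff above, but pareto_front and its exact signature are assumptions about the nevergrad version in use, so check the installed API:

# Hypothetical sketch: sample a few points from the Pareto front under each
# selection mode listed in the diff above. Assumes nevergrad's
# MultiobjectiveFunction exposes pareto_front(size, subset=...).
for subset in ['random', 'loss-covering', 'hypervolume', 'domain-covering']:
    points = target.pareto_front(2, subset=subset)
    print(subset, points)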

gradientDecay = 0.1

nImages = input.size(0)
assert nImages == 1
Reviewer:

why always 1?

Author (@teytaud):

To avoid mixing the different multiobjective functions.
In any case, I believe the original code is not correct for the case nImages > 1.

if not randomSearch:
    loss.sum(dim=0).backward()

if nevergrad:
Reviewer:

Not modified in this diff, but: to check whether it is None, always write if nevergrad is not None.

assert nImages == 1
# assert randomSearch
thelosses = [1., 3., 3.] # These numbers should be discussed...
target = MultiobjectiveFunction(lambda x: thelosses, tuple(thelosses))
Reviewer:

This is constant? Maybe use lambda x: x, and then target(losses) instead of target.compute_aggregate_loss?
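
For what it's worth, a minimal sketch of this suggestion; the upper_bounds keyword and the behaviour of calling the wrapper directly are assumptions about the nevergrad version in use:

# Identity objective: the losses are passed in at call time instead of
# being captured from an enclosing scope.
target = MultiobjectiveFunction(lambda x: x, upper_bounds=(1., 3., 3.))

current_losses = [0.5, 2.0, 1.5]    # hypothetical values for illustration
aggregate = target(current_losses)  # computes and returns the aggregate loss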

Author (@teytaud):

When thelosses is later updated, this function will return something different. Tested with:

y = [1]

def biz(x):
    print(y)
    return y

biz(2)
biz(2)
y = [3]
biz(2)

Reviewer:

I would have thought that overriding y would unlink the external y from the one used internally, but indeed it does not (the function looks the name up at call time)...
Still, this is an ugly hack you should really avoid, both for clarity and to prevent extremely weird bugs.

Author (@teytaud):

The point is that Nevergrad is not equipped for Ask&Tell in multiobjective mode. I did not find a better solution for now.
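
For context, a minimal sketch of the workaround being discussed; optimizer, budget and evaluate_objectives are hypothetical stand-ins for the surrounding code:

# The lambda captures the list `thelosses`, so updating it in place changes
# what `target` reports on the next call: this is the hack discussed above.
thelosses = [1., 3., 3.]
target = MultiobjectiveFunction(lambda x: thelosses, tuple(thelosses))

for _ in range(budget):
    candidate = optimizer.ask()
    thelosses[:] = evaluate_objectives(candidate)  # hypothetical evaluation; in-place update
    optimizer.tell(candidate, target(candidate))   # the lambda ignores its argument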

@teytaud commented Feb 18, 2020

@Brozi: to use the latest master, install Nevergrad with:
pip install git+https://github.com/facebookresearch/nevergrad@master#egg=nevergrad
I propose that we launch this soon together.


 loss = -lambdaD * model.netD(noiseOut)[:, 0]
 sumLoss += loss
+combinedLoss[2] += loss

Reviewer:

you can put just =


if nevergrad:
    for i in range(nImages):
        optimizers[i].tell(inps[i], float(sumLoss[i]))

Reviewer:

You could add an if nevergrad == 'moo' check to avoid breaking the code when optimizing the mono-objective case.
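
A minimal sketch of the suggested guard; what exactly gets reported in the 'moo' branch depends on the surrounding diff, so this is only illustrative:

if nevergrad == 'moo':
    # Multiobjective case: report the aggregate loss from the wrapper.
    for i in range(nImages):
        optimizers[i].tell(inps[i], target.compute_aggregate_loss(thelosses))
else:
    # Mono-objective case: report the scalar loss, as before.
    for i in range(nImages):
        optimizers[i].tell(inps[i], float(sumLoss[i]))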


 else:
-    optimalVector = torch.where(sumLoss.view(-1, 1) < optimalLoss.view(-1, 1),
+    optimalVector = torch.where(combinedLoss.view(-1, 1) < optimalLoss.view(-1, 1),

Reviewer:

You probably want combinedLoss.sum(dim=0), and the same thing for optimalLoss here.
Otherwise you are taking parts of varNoise and parts of optimalLoss, but those parts are not related to the losses that are actually better.

                               varNoise, optimalVector).detach()
-    optimalLoss = torch.where(sumLoss < optimalLoss,
-                              sumLoss, optimalLoss).detach()
+    optimalLoss = torch.where(combinedLoss < optimalLoss,

Reviewer:

Same as above; I think you should also take .sum(dim=0). A sketch follows.
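
A minimal sketch of the suggested fix, assuming combinedLoss and optimalLoss stack one row per objective so that dim=0 sums over objectives; names follow the diff:

# Reduce over objectives first, then select per image.
totalLoss = combinedLoss.sum(dim=0)              # shape: [nImages]
totalOptimal = optimalLoss.sum(dim=0)
better = (totalLoss < totalOptimal).view(-1, 1)  # one decision per image
optimalVector = torch.where(better, varNoise, optimalVector).detach()
optimalLoss = torch.where(better.view(1, -1), combinedLoss, optimalLoss).detach()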


 print(str(iter) + " : " + formatCommand.format(
-    *["{:10.6f}".format(sumLoss[i].item())
+    *["{:10.6f}".format(combinedLoss[i].item())

Reviewer:

combinedLoss[:,i]
