Wikitext - [WIP] #150
base: development
Conversation
- Added dependencies and the benchmark; added token generation and model training code from MO-ASHA to the dependencies.
- Return prediction time as the evaluation time; changed perplexity --> log_perplexity for the objective (MO-ASHA uses log perplexity); changed error --> accuracy; added tqdm.
- Report train and eval time separately in the objective function; code formatting; added a test file.
- Added recipe and container file.
- …al encoding doesn't work for an odd number, therefore log seems like the perfect solution; removed logs.
- Update GitHub Actions workflow and drop support for Singularity < 3.7.
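The perplexity --> log_perplexity change listed in the commits can be sketched in a few lines (a minimal sketch with our own function name, not the PR's code): for a mean cross-entropy loss, perplexity is `exp(loss)`, so log perplexity is simply the loss itself and avoids the overflow that `exp` can cause for badly-performing configurations.

```python
import math

# Hypothetical sketch (the function and dict keys are ours, not from the PR):
# deriving the objectives mentioned in the commit messages from a mean
# cross-entropy loss.
def lm_objectives(cross_entropy: float) -> dict:
    # perplexity = exp(mean cross-entropy); overflows for large losses
    perplexity = math.exp(cross_entropy)
    # log(exp(ce)) == ce, numerically safe for any loss value
    log_perplexity = cross_entropy
    return {"perplexity": perplexity, "log_perplexity": log_perplexity}
```

Minimizing log perplexity preserves the ranking of configurations while keeping the objective on a well-behaved scale.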
I left some comments. Could you please have a look at them? Thanks.
hpobench/container/recipes/mo/Singularity.LanguageModelBenchmark
* Add yahpo_gym with help from phmueller. Co-authored-by: PhMueller <[email protected]>
Update the Nasbench201 benchmark to support multi-objective queries. If you want to use the *single objective* Nasbench201 benchmark, you can query the SO version of this benchmark. Although we have not changed the benchmark logic, you can still use container v0.0.5 in your experiments to reproduce results from the old version of this benchmark.
We add the benchmark from the MO-ASHA paper by Schmucker et al. It is a multi-objective benchmark that trains an MLP on the Adult data set.
Added MO CNN benchmarks from the Bag of Baselines paper. We deviate from the original benchmark in two points:
* we return only the training time as cost, instead of the total elapsed time
* we return `1 - accuracy` instead of `-100 * accuracy` as the objective for minimization, to achieve better output scaling.
Co-authored-by: ayushi-3536 <[email protected]> Co-authored-by: Philipp Müller <[email protected]>
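The rescaling described in that commit can be illustrated with a small sketch (the helper names are ours, not HPOBench's API):

```python
# Hypothetical helpers (ours, not HPOBench's API) contrasting the two scalings.
def old_objective(accuracy: float) -> float:
    return -100.0 * accuracy      # old: unbounded negative values, awkward scale

def new_objective(accuracy: float) -> float:
    return 1.0 - accuracy         # new: misclassification rate, bounded in [0, 1]
```

Both are monotonically decreasing in accuracy, so minimizing either ranks configurations identically; the new form just keeps the objective in [0, 1].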
It would be cool if you could go through the comments. Thanks.
class TransformerModel(nn.Module):
    """Container module with an encoder, a transformer module, and a decoder."""

    def __init__(self, ntoken, ninp, nhead, nhid, nlayers, dropout=0.5, bptt=35, rng=None):
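One of the commit messages notes that the sinusoidal positional encoding does not work for an odd embedding size, which is presumably why the commit suggests a log (power-of-two) scale for that hyperparameter. A minimal pure-Python sketch (ours, not the PR's code) of the standard sinusoidal encoding shows where the even-size requirement comes from: the values are filled in sin/cos pairs.

```python
import math

# Sketch (not the PR's code) of the standard sinusoidal positional encoding.
# Each dimension pair (2i, 2i + 1) holds sin/cos at the same frequency, so the
# embedding size d_model must be even.
def positional_encoding(max_len: int, d_model: int):
    if d_model % 2 != 0:
        raise ValueError("sinusoidal encoding fills sin/cos pairs; d_model must be even")
    pe = [[0.0] * d_model for _ in range(max_len)]
    for pos in range(max_len):
        for i in range(0, d_model, 2):
            freq = math.exp(-math.log(10000.0) * i / d_model)
            pe[pos][i] = math.sin(pos * freq)
            pe[pos][i + 1] = math.cos(pos * freq)
    return pe
```

Sampling the embedding size on a power-of-two grid guarantees it is even, which is consistent with the "log seems like the perfect solution" remark in the commits.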
signature if possible
- Add dependency version.