Question about the setting of n_jobs #404

@Lamed-git

Description

To speed up the machine learning, I specify my own custom pipeline as follows:
from automatminer import get_preset_config, TPOTAdaptor, MatPipe
config = get_preset_config("express")
config["learner"] = TPOTAdaptor(max_time_mins=6000, n_jobs=36)

But when I use the top command to look at the Python process, I find that Python uses only one core once it reaches the "FeatureReducer: Starting fitting." step; it does not use multiple cores the way the AutoFeaturizer step does. I don't know whether this is caused by an incorrect parameter setting on my part or by the program itself. I hope my question can be answered, thank you very much!
In addition, if this approach cannot parallelize the program and speed it up, I would like to ask whether there are other reasonable methods that can be used to speed up the machine learning.
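For context, here is a generic sketch (not automatminer code) of how an n_jobs-style parameter typically maps onto a pool of worker processes. The point is that n_jobs only helps for steps whose implementation actually dispatches independent units of work to such a pool; a step written as a single sequential loop will occupy one core no matter what n_jobs is set to, which may be what is happening in the FeatureReducer fitting step. The function names below are purely illustrative.

```python
from concurrent.futures import ProcessPoolExecutor

def evaluate_candidate(x: int) -> int:
    # Stand-in for one independent unit of work,
    # e.g. scoring a single model or feature candidate.
    return x * x

def run_parallel(candidates, n_jobs: int = 4):
    # n_jobs worker processes each pick up units of work, so CPU-bound
    # work can use up to n_jobs cores. If this loop were written
    # sequentially instead, only one core would be busy.
    with ProcessPoolExecutor(max_workers=n_jobs) as pool:
        return list(pool.map(evaluate_candidate, candidates))

if __name__ == "__main__":
    print(run_parallel(range(8), n_jobs=4))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In other words, seeing one busy core during a particular pipeline step usually indicates that that step is single-threaded by design, not that the n_jobs setting was ignored elsewhere.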
