
The inference speed on pad #5979

@zengjie617789

Description

I implemented my code with ncnn on a pad (tablet); here is the pad's hardware configuration:

Processor: Snapdragon 685
CPU: 4 × A73 (2.8 GHz) + 4 × A53 (1.9 GHz)

Inference takes 3.9 s, which is far slower than the 500 ms I measure on a phone.
The ncnn settings are the same as on the phone:

ncnn::set_cpu_powersave(4);
model.retina_opt.use_bf16_storage = true;

model.opt.lightmode = true;
model.opt.num_threads = 4;
model.opt.blob_allocator = &retina_g_blob_pool_allocator;
model.opt.workspace_allocator = &retina_g_workspace_pool_allocator;
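For reference, ncnn's documented values for `set_cpu_powersave` are 0 (all cores), 1 (little cluster only), and 2 (big cluster only); the `4` above is outside that documented range, so the threads may not be pinned to the big A73 cores as intended. A minimal sketch of binding to the big cluster, assuming an ncnn version that provides `ncnn::get_big_cpu_count()` (the `configure` helper and `model` object here are illustrative, not from the original code):

```cpp
#include "net.h"  // ncnn::Net
#include "cpu.h"  // ncnn::set_cpu_powersave, ncnn::get_big_cpu_count

// Illustrative helper: call once before loading params/weights.
void configure(ncnn::Net& model)
{
    // 0 = all cores, 1 = little cluster only, 2 = big cluster only
    ncnn::set_cpu_powersave(2);

    model.opt.lightmode = true;
    model.opt.use_bf16_storage = true;
    // Match the thread count to the big cluster (4 x A73 on this SoC),
    // so threads are not scheduled onto the slower A53 cores.
    model.opt.num_threads = ncnn::get_big_cpu_count();
}
```

On a 4+4 big.LITTLE SoC like this one, letting threads migrate onto the A53 cluster is a common cause of large slowdowns relative to a phone with stronger cores, so this is worth ruling out first.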

Any help would be appreciated.
