cpr::ThreadPool high CPU usage when Paused #1035

Closed
baderouaich opened this issue Apr 1, 2024 · 4 comments · Fixed by #1039

Comments

@baderouaich

Description

When pausing a cpr::ThreadPool, all of the created threads become hyperactive because they busy-wait on std::this_thread::yield(). The expected behavior is for the threads to use little to no CPU while paused.

Example/How to Reproduce

  1. Create and start a cpr::ThreadPool:

     cpr::ThreadPool tp;
     tp.SetMinThreadNum(1);
     tp.SetMaxThreadNum(std::thread::hardware_concurrency());
     tp.Start(0);

  2. Submit some work to the thread pool so that all the threads get created:

     for (std::size_t i = 0; i < std::thread::hardware_concurrency(); ++i) {
         tp.Submit([i]() -> void {
             std::cout << "job #" << i << " started" << std::endl;
             std::this_thread::sleep_for(std::chrono::seconds(1)); // simulate actual work
             std::cout << "job #" << i << " finished" << std::endl;
         });
     }

  3. Pause the thread pool:

     tp.Pause();
     // Hold the main thread to keep the thread pool alive
     std::cin.get();

  4. Notice the high CPU usage even after all jobs have completed (use htop or another system monitor utility on Linux, or Task Manager on Windows).

Complete example code:

#include <chrono>
#include <iostream>
#include <thread>
#include <cpr/threadpool.h>

int main() {
  cpr::ThreadPool tp;
  tp.SetMinThreadNum(1);
  tp.SetMaxThreadNum(std::thread::hardware_concurrency());
  tp.Start(0);

  for (std::size_t i = 0; i < std::thread::hardware_concurrency(); ++i) {
    tp.Submit([i]() -> void {
      std::cout << "job #" << i << " started" << std::endl;
      std::this_thread::sleep_for(std::chrono::seconds(1)); // simulate actual work
      std::cout << "job #" << i << " finished" << std::endl;
    });
  }

  tp.Pause();
  // Hold the main thread to keep the thread pool alive
  std::cin.get();

  return 0;
}

In my case, all CPUs went hyperactive:
[screenshot: htop showing all eight worker threads at high CPU]
You can see that the main thread is [S] (sleeping) due to the std::cin.get(), while the 8 other threads are yielding and causing high CPU usage.
To make sure it's not an issue with the htop utility, I tried GNOME System Monitor, which reported the same thing.
[screenshot: GNOME System Monitor showing the same high CPU usage]

Possible Fix

Instead of having all threads yield in a busy loop while paused, which is notorious for high CPU usage across platforms, a std::condition_variable can be used to wait while the status is PAUSE and be notified when the status changes.
So instead of

while (status == PAUSE) {
  std::this_thread::yield();
}

It can be:

if (status == PAUSE) {
    std::unique_lock<std::mutex> locker(task_mutex);
    status_cond.wait(locker, [this]() {
        return status != PAUSE;
    });
}

Then, every time the status is updated, status_cond is notified, which wakes the waiting threads so they can resume or stop.
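
For illustration, here is a minimal sketch of the notify side, assuming the pool gains a std::condition_variable status_cond member next to the existing task_mutex, and that RUNNING and STOP status values exist as counterparts to PAUSE (names are modeled on the snippet above, not on cpr's confirmed internals):

void ThreadPool::Resume() {
    {
        std::unique_lock<std::mutex> locker(task_mutex);
        status = RUNNING; // update the status under the same mutex the waiters use
    }
    status_cond.notify_all(); // wake every worker parked in the PAUSE wait
}

void ThreadPool::Stop() {
    {
        std::unique_lock<std::mutex> locker(task_mutex);
        status = STOP; // paused workers must also observe STOP so they can exit
    }
    status_cond.notify_all();
}

Notifying after releasing the lock avoids waking a worker only to have it immediately block again on task_mutex.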

I have tried this locally and it seems to work fine. If there is no alternative solution, I would be glad to open a PR.

Where did you get it from?

GitHub (branch e.g. master)

Additional Context/Your Environment

  • OS: Ubuntu 22.04.1
  • Version: Linux HP 6.5.0-26-generic
@COM8
Member

COM8 commented Apr 3, 2024

@baderouaich thanks for reporting. I will try to have a look at it over the weekend!

@COM8 COM8 self-assigned this Apr 3, 2024
@COM8
Member

COM8 commented Apr 7, 2024

Found the cause:

while (status == PAUSE) {
    std::this_thread::yield();
}

A condition-variable-based approach would be better. Working on it.
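
For context, here is a minimal sketch of how such a wait could sit inside the worker loop; the tasks queue and the RUNNING/STOP status values are assumptions modeled on the snippets above, not the actual code of the fix:

while (true) {
    std::unique_lock<std::mutex> locker(task_mutex);
    // Block until the pool stops, or it is running and work is available.
    status_cond.wait(locker, [this]() {
        return status == STOP || (status == RUNNING && !tasks.empty());
    });
    if (status == STOP) {
        break;
    }
    auto task = std::move(tasks.front());
    tasks.pop();
    locker.unlock();
    task(); // run the task outside the lock so other workers can make progress
}

With a single condition variable covering both state changes and task arrival, Submit() would also have to notify status_cond whenever it enqueues a task.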

@COM8
Member

COM8 commented Apr 7, 2024

The PR for it is here: #1039

Could you please confirm this is fixed now?

@COM8
Member

COM8 commented Apr 7, 2024

There is another bug inside the thread pool that I found while debugging this: #1040
