Currently the `CpuSet` struct consists of a single `cpu_set_t` (or `cpuset_t` on *BSD), and the `sched_*etaffinity` syscalls are called with a constant `cpusetsize` parameter, always `mem::size_of::<CpuSet>()` (=64 on all targets, at least those I care about), regardless of the value of `sysconf(_SC_NPROCESSORS_CONF)`.
What I would like to see is a future-proof design that supports runtime-sized allocation of a `cpu_set_t` slice, similar to what C macros such as `CPU_ALLOC()` facilitate.
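To illustrate, here is a minimal sketch of the size computation that `CPU_ALLOC_SIZE` performs in C, translated to Rust: round the CPU count up to whole mask words, then convert to bytes. `cpu_alloc_size` and `alloc_cpu_mask` are hypothetical helper names, not part of any existing API.

```rust
use std::mem;

// Bits held by one word of the mask (64 on the platforms discussed).
const BITS_PER_WORD: usize = 8 * mem::size_of::<u64>();

// Bytes needed for a mask covering `ncpus` CPUs, rounded up to whole words.
fn cpu_alloc_size(ncpus: usize) -> usize {
    ((ncpus + BITS_PER_WORD - 1) / BITS_PER_WORD) * mem::size_of::<u64>()
}

// A runtime-sized mask is then just a heap allocation of that many bytes,
// e.g. sized from sysconf(_SC_NPROCESSORS_CONF) at startup.
fn alloc_cpu_mask(ncpus: usize) -> Vec<u64> {
    vec![0u64; cpu_alloc_size(ncpus) / mem::size_of::<u64>()]
}
```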
For example, this could be abstracted by making `CpuSet` an enum internally that holds either a default-sized stack value or a dynamically sized slice on the heap, with the affinity calls picking the logic based on the variant. Alternatively, there could be two types, `CpuSet` and `CpuSetDyn`, implementing the same (unsafe?) trait that provides `cpusetsize` and a pointer to the first `cpuset_t`, making the affinity fns generic over this trait.
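The second option above could look roughly like the following. This is only a sketch under stated assumptions: the type and function names are illustrative, `cpu_set_t` is stood in by a 128-byte struct rather than the real libc type, and the syscall itself is stubbed out.

```rust
use std::mem;

// Stand-in for libc's cpu_set_t (a 128-byte bit mask on Linux).
#[allow(non_camel_case_types)]
#[repr(C)]
#[derive(Clone, Copy)]
struct cpu_set_t { bits: [u64; 16] }

/// Unsafe because implementors must guarantee that `as_ptr()` is valid
/// for `cpusetsize()` bytes, since both go straight to the kernel.
unsafe trait AsCpuSet {
    fn cpusetsize(&self) -> usize;
    fn as_ptr(&self) -> *const cpu_set_t;
}

/// Fixed-size set: today's behavior.
struct CpuSet(cpu_set_t);

/// Runtime-sized set backed by a heap allocation.
struct CpuSetDyn(Vec<u64>);

unsafe impl AsCpuSet for CpuSet {
    fn cpusetsize(&self) -> usize { mem::size_of::<cpu_set_t>() }
    fn as_ptr(&self) -> *const cpu_set_t { &self.0 }
}

unsafe impl AsCpuSet for CpuSetDyn {
    fn cpusetsize(&self) -> usize { self.0.len() * mem::size_of::<u64>() }
    fn as_ptr(&self) -> *const cpu_set_t { self.0.as_ptr().cast() }
}

// The affinity fns then become generic over the trait; a real
// implementation would forward size and pointer to libc::sched_setaffinity.
fn sched_setaffinity_sketch<S: AsCpuSet>(set: &S) -> usize {
    set.cpusetsize() // placeholder for the actual syscall
}
```

The enum variant of the design would instead match on stack vs. heap storage inside each affinity fn, avoiding the extra public type at the cost of a branch per call.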