Description
Zig Version
0.15.0-dev.1039+e7b18a7ce
Steps to Reproduce and Observed Behavior
```zig
const std = @import("std");

pub fn main() void {
    _ = &std.http.Client.fetch;
}
```
This code takes over 6 seconds to compile on my computer (for comparison, an empty program takes about 1.3 seconds):
```
$ time zig build-exe test.zig // 0.14.0
real 0m6,709s
user 0m5,894s
sys 0m0,861s
```
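For reference, the baseline number comes from an empty program, i.e. something like:
```zig
// Baseline: the smallest possible program, measuring only the compiler's
// fixed startup and codegen cost.
pub fn main() void {}
```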
This does seem to get slightly better with #24429; however, it still adds about 4 seconds to the compile time compared to the baseline:
```
$ time zig-debug build-exe test.zig -fllvm // 0.15.0-dev.1039+e7b18a7ce
real 0m5,306s
user 0m5,048s
sys 0m0,334s
```
The self-hosted backend is of course faster by comparison, but it still adds about 1.2 extra seconds:
```
$ time zig-debug build-exe test.zig // 0.15.0-dev.1039+e7b18a7ce
real 0m1,480s
user 0m2,113s
sys 0m0,431s
```
Inspecting the binary (attached is the output of nm, filtered to code symbols and sorted by size, i.e. the second column: out.txt), we can see that basically all of the biggest functions come from std/crypto.
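A command along these lines reproduces that listing (the exact invocation is a sketch using GNU nm; `test` is the binary produced by the commands above):
```
# Print each symbol's size and sort by it, then keep only text-section
# (code) symbols.
nm --print-size --size-sort ./test | awk '$3 == "t" || $3 == "T"'
```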
Furthermore, spot-checking some of that code, it appears that there is a lot of comptime code duplication (e.g. via inline cases) as well as what looks like manual loop unrolling.
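To illustrate the pattern I mean (a made-up sketch, not actual std.crypto code): an `inline for` over a comptime-known array is expanded at compile time, so the body is emitted once per element.
```zig
const std = @import("std");

// Made-up sketch, not real std.crypto code: `inline for` stamps out a copy
// of the loop body per round constant at compile time. Constants fold in and
// loop overhead disappears, at the cost of code size and compile time.
fn mixRounds(word: *u32) void {
    const round_constants = [_]u32{ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5 };
    inline for (round_constants) |k| {
        word.* +%= k; // wrapping add with the folded-in constant
        word.* = std.math.rotl(u32, word.*, 5);
    }
}

test "unrolled rounds" {
    var w: u32 = 1;
    mixRounds(&w);
    try std.testing.expect(w != 1);
}
```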
Expected Behavior
I think the standard library should cut down on the amount of code duplication and favor compilation speed and binary size over achieving the highest performance in some artificial benchmark.
I'm sure we can afford to lose a few percentage points in those benchmarks if it means better (on the order of seconds) compile times and smaller binaries for everyone.
And smaller binaries might actually make real-world applications faster, since there will be fewer instruction-cache misses.
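As a hypothetical sketch of the direction I mean (the counterpart to the `inline for` example above, not a proposed patch): dropping `inline` emits the loop body once, trading a little per-iteration overhead for less code to compile.
```zig
const std = @import("std");

// Hypothetical counterpart to the earlier sketch: a plain runtime loop keeps
// a single copy of the body, so there is less machine code to generate.
fn mixRoundsCompact(word: *u32) void {
    const round_constants = [_]u32{ 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5 };
    for (round_constants) |k| {
        word.* +%= k;
        word.* = std.math.rotl(u32, word.*, 5);
    }
}

test "rolled rounds" {
    var w: u32 = 1;
    mixRoundsCompact(&w);
    try std.testing.expect(w != 1);
}
```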