I'm observing what looks like suboptimal codegen when loading and assigning `Bool` variables. The value seems to undergo a transformation where, I'm guessing, the information that it is already a `Bool` is lost, and on assignment it is forced back into `[0, 1]` with a bitwise and, which at least naively seems redundant.

Example below, tested on 0.5.1 and 0.6.0-pre.alpha.
```julia
mutable struct Foo
    flag1::Bool
    flag2::Bool
end

function update(foo::Foo)
    foo.flag2 = foo.flag1
end

code_native(update, (Foo,))
code_llvm(update, (Foo,))
```
LLVM output:

```llvm
define i8 @julia_update_60814(i8** dereferenceable(2)) #0 !dbg !5 {
top:
  %1 = bitcast i8** %0 to i8*
  %2 = load i8, i8* %1, align 16
  %3 = getelementptr i8, i8* %1, i64 1
  %4 = and i8 %2, 1
  store i8 %4, i8* %3, align 1
  ret i8 %4
}
```
Native x64 output:

```
	.text
Filename: In[1]
	pushq	%rbp
	movq	%rsp, %rbp
Source line: 9
	movb	(%rdi), %al
	andb	$1, %al
	movb	%al, 1(%rdi)
	popq	%rbp
	retq
	nopl	(%rax)
```
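For comparison, here is a sketch of the same copy using `UInt8` fields instead of `Bool` (the `Bar`/`update8` names are my own, added for illustration). My expectation is that the emitted IR for this version is a plain load/store with no `and i8 …, 1`, which would suggest the mask comes from the `Bool` conversion on assignment rather than from the field store itself:

```julia
# On Julia >= 0.7 the code-inspection functions live in InteractiveUtils;
# on the 0.5/0.6 versions tested above they are available from Base.
using InteractiveUtils

# Same layout as Foo, but with UInt8 fields instead of Bool.
mutable struct Bar
    flag1::UInt8
    flag2::UInt8
end

# Same field-to-field copy as `update`: since no [0, 1] normalization is
# required for UInt8, I would not expect an `and` in the IR here.
function update8(bar::Bar)
    bar.flag2 = bar.flag1
end

code_llvm(update8, (Bar,))
```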