Refactor bytecode representation #4220
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
Test262 conformance changes
This is not finished in any way. I just wanted to put it out to get some feedback on the overall concept. To get an overview of / a feeling for the changes:
e45496f to 55c225e
Really liking the direction of this PR, simplifying the arguments and generating the opcode functions is a great step forward. 😄
That said, I did notice that the bytecode size increases by about 3x (based on checking the combined.js output). While some of that overhead might be reduced by encoding multiple instructions into a single one, it's still likely to end up at least twice as large overall. Additionally, splitting the bytecode into two arrays could have a negative impact on cache locality.
Overall, I think the approach we're taking is aligned with what engines like V8 and JavaScriptCore are doing. There's a great article from the JavaScriptCore team that touches on a similar idea with prefix opcodes: A new bytecode format for JavaScriptCore.
In terms of performance, I suspect the bigger issue isn't so much unaligned reads, but rather how we read arguments — currently it's done one at a time, with a bounds check on each access. We might see a noticeable performance boost if we check bounds ahead of time and read the arguments in bulk.
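The bulk-read idea above can be sketched as follows. This is an illustrative example, not Boa's actual code: the function name and layout are invented, but it shows the difference between one bounds check per argument and a single upfront check followed by unchecked-in-effect reads.

```rust
/// Hypothetical sketch: read two `u32` arguments that follow the opcode
/// byte with a single upfront bounds check, instead of checking bounds
/// on each individual read.
fn read_two_args(bytecode: &[u8], pc: usize) -> Option<(u32, u32)> {
    // One bounds check covering all 8 argument bytes.
    let bytes: &[u8; 8] = bytecode.get(pc..pc + 8)?.try_into().ok()?;
    // These slices are statically in range, so no further checks are paid.
    let a = u32::from_le_bytes(bytes[0..4].try_into().unwrap());
    let b = u32::from_le_bytes(bytes[4..8].try_into().unwrap());
    Some((a, b))
}
```

Returning `None` on a short buffer keeps the fast path branch-light while still being safe Rust.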
Would love to hear your thoughts! :)
EDIT: Here is the code I used to get the size :)
This PR explores a new bytecode representation.

Currently the bytecode is encoded in one `[u8]` list. All opcodes (`u8`) and arguments are encoded in that list. This means that we have to perform a lot of individual, unaligned reads on that list. For example, when we read an opcode with two registers, we currently perform three unaligned reads.

This PR moves the bytecode to a fixed 64-bit instruction list, with arguments either encoded in the instruction or spilling over to a `[u32]` list. One `u64` instruction contains an opcode (`u8`), a flag representing the format of the arguments (`u8`), and either inline arguments or the index and argument count of the arguments in the spillover list.

In my local benchmarks this seems to have a positive impact on performance. Generally all benchmarks score higher, with the overall score going from 307 to 322.
One drawback is that there is some wasted space in the instructions. How much depends on the opcode and the arguments format, but since most opcodes use at least two registers, this is less of a concern than it might have been previously. In addition, I can imagine some reductions in wasted space by fitting multiple opcodes into one `u64` where possible. This would require a non-integer `pc` and adjusting the `patch` code, but should be possible.

In addition to this change in bytecode encoding, I took the chance to add two further changes:
- `emit` functions for every opcode that can be used in the bytecompiler, to get rid of error-prone manual `emit` code.
- Moving the handling of `CompletionType`s into the opcode code itself. This results in the `CompletionType` enum being removed. It moves the handling code out of the hot loop that iterates through the opcodes. Also, many opcodes can only return a limited set of completions; moving the handling into the opcodes enables more specific handling, in some cases removing any handling at all.