Performance of PER codec #244
Comments
Thank you for your issue! Would you be able to attach the flamegraphs? That would be very helpful. In general, the initial version I made prioritised correctness over performance, so there are likely a lot of places where it could be more efficient. I already know that I haven't ported to PER the empty-struct optimisation that was added to BER.
Seems like your test bench mostly uses integers. I think there was a plan to make the inner implementation of integer types selectable as a feature, so that a different internal type could be chosen for performance reasons when very big numbers are not required.
Yeah, that's still a todo; I should write up an issue giving details in case someone else who has more time is interested.
I think this is a rather important optimisation in general and should not take too much work to implement if we have, for example, just i128 and BigInt as the initial options. If you have time to write down what you had in mind, maybe I can try to do it.
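A minimal sketch of that fallback idea (the names are hypothetical, not rasn's actual types, and it assumes the num-bigint crate for the arbitrary-precision case): keep values in a primitive i128 and only promote to a heap-allocated big integer on overflow.

```rust
use num_bigint::BigInt; // assumed dependency for the arbitrary-precision variant

// Hypothetical integer representation: a fast primitive path with a big-integer fallback.
enum Integer {
    Primitive(i128),
    Big(BigInt),
}

impl Integer {
    // Add a primitive value, promoting to the Big variant only when i128 overflows.
    fn checked_add(self, rhs: i128) -> Integer {
        match self {
            Integer::Primitive(lhs) => match lhs.checked_add(rhs) {
                Some(sum) => Integer::Primitive(sum),
                None => Integer::Big(BigInt::from(lhs) + BigInt::from(rhs)),
            },
            Integer::Big(lhs) => Integer::Big(lhs + BigInt::from(rhs)),
        }
    }
}
```

The encoder can then stay on the primitive path in the common case and only pay the allocation cost of a BigInt when a value genuinely needs it.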
I have been reworking the integer type (using primitives (i128) by default and switching to larger ones on overflow, or if a big one is created manually). Hopefully I can open a draft PR at the end of the week. The resulting integer will be an enum, and the type of the big integers does not matter that much anymore. Not sure if it is the best approach, but it is the one I chose after trying quite a few different things. Let's leave those comments for the PR; it can still be completely changed.

Integers are not the only problem with UPER, though. The extensive use of new vector buffers and the moving of that data contributes more. Default low capacities in vectors, a lot of single pushes, and the overall creation of new buffers instead of sharing a pointer to one or reusing existing allocations slow things down quite a lot.

Some initial differences on an M2 Pro from the integer change:

UPER: (benchmark screenshot)

COER (the difference was much more impactful): (benchmark screenshot)

I have also made an initial rework to optimise allocations in COER (the results below are based on the integer remake); maybe UPER will follow if I have time. (benchmark screenshot)

So by changing the integer type and reducing allocations, it was already possible to get at least a 3x speedup for COER, based on the benchmark of @dudycz. Allocations could be improved further, but I am having painful issues with lifetimes.
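To illustrate the allocation pattern being described (a hedged illustration of the general idea, not rasn's actual internals; the function names are made up), the difference is between building a fresh Vec per field and appending into one shared, preallocated buffer:

```rust
// Allocation-heavy pattern: every field builds its own Vec, which is then
// moved and copied into the parent buffer.
fn encode_field_owned(value: u32) -> Vec<u8> {
    let mut buf = Vec::new(); // starts with capacity 0 and grows push by push
    buf.extend_from_slice(&value.to_be_bytes());
    buf
}

// Reuse-friendly pattern: append into a caller-provided buffer instead.
fn encode_field_into(value: u32, out: &mut Vec<u8>) {
    out.extend_from_slice(&value.to_be_bytes());
}

fn main() {
    let mut out = Vec::with_capacity(1024); // one up-front allocation, reused throughout
    for v in 0u32..100 {
        encode_field_into(v, &mut out);
    }
    assert_eq!(out.len(), 400);
    let _ = encode_field_owned(42); // kept only to show the contrast
}
```

Passing a `&mut Vec<u8>` (or a reusable writer) down through the encoder is one way to avoid the per-field allocations, though, as noted above, threading such a borrow through the encoder is where the lifetime pain shows up.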
Yes, I can confirm a big improvement in decoding time (on my "bench" setup I observed a decrease from 457 µs to 341 µs!). Good job!
Meanwhile I have updated my benchmark with a third UPER codec, asn1rs.
Separately, I think we should add continuous profiling to the CI, and I've created an issue for that. Since this issue doesn't have a specific goal or end point, I'm going to move it to a discussion.
This issue was moved to a discussion. You can continue the conversation there.
Hi. I have been playing with the two most popular PER codecs, rasn and asn1-codecs, and made a benchmark comparing their performance. One thing I have noticed is that rasn can be ~10x slower at encoding in some complex cases. I looked into flamegraphs and callgrind output, but I couldn't figure out what contributes to this big difference. If you're interested, I could try to collect some and attach them here.
Link to repo with benchmark: https://github.com/dudycz/asn1_codecs_bench
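For reference, a comparison like this is typically structured as a Criterion benchmark; the sketch below shows only the general shape, with placeholder `encode_with_*` wrappers standing in for each codec's real entry points (they are assumptions, not code from the linked repo):

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

// Placeholder: call rasn's UPER encoder on a decoded message here.
fn encode_with_rasn(input: &[u8]) -> Vec<u8> {
    input.to_vec()
}

// Placeholder: call asn1-codecs' UPER encoder on the same message here.
fn encode_with_asn1_codecs(input: &[u8]) -> Vec<u8> {
    input.to_vec()
}

fn bench_encoders(c: &mut Criterion) {
    let sample = vec![0u8; 1024]; // stand-in for a real test message
    c.bench_function("rasn uper encode", |b| {
        b.iter(|| encode_with_rasn(black_box(&sample)))
    });
    c.bench_function("asn1-codecs uper encode", |b| {
        b.iter(|| encode_with_asn1_codecs(black_box(&sample)))
    });
}

criterion_group!(benches, bench_encoders);
criterion_main!(benches);
```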