This issue was moved to a discussion.

You can continue the conversation there. Go to discussion →

Performance of PER codec #244

Closed
dudycz opened this issue Apr 10, 2024 · 10 comments


@dudycz

dudycz commented Apr 10, 2024

Hi. I've been playing with the two most popular PER codecs, rasn and asn1-codecs, and put together a benchmark comparing their performance. One thing I noticed is that rasn can be ~10x slower at encoding in some complex cases:

| Codec | Encoding (µs) | Decoding (µs) |
| --- | --- | --- |
| rasn | 1916.7 | 826.8 |
| asn1-codecs | 208 | 145.5 |

I looked into flamegraphs and callgrind output, but I couldn't figure out what contributes to this big difference. If you're interested, I could try to collect some and attach them here.

Link to repo with benchmark: https://github.com/dudycz/asn1_codecs_bench
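For reference, a per-iteration timing loop in the spirit of the linked benchmark (the real repo's harness may well differ; `time_us` and the sample workload below are illustrative only):

```rust
use std::time::Instant;

/// Run `f` `iters` times and return the average time per call in µs.
/// (Illustrative stand-in for the linked benchmark's harness.)
fn time_us<F: FnMut()>(iters: u32, mut f: F) -> f64 {
    let start = Instant::now();
    for _ in 0..iters {
        f();
    }
    start.elapsed().as_micros() as f64 / iters as f64
}

fn main() {
    // Stand-in workload; the real benchmark encodes/decodes
    // ASN.1 messages with each codec here instead.
    let data: Vec<u8> = (0..1024).map(|i| i as u8).collect();
    let avg = time_us(100, || {
        let _ = data.iter().map(|&b| b as u64).sum::<u64>();
    });
    assert!(avg >= 0.0);
    println!("avg: {avg:.1} µs/iter");
}
```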

@XAMPPRocky
Collaborator

Thank you for your issue! Would you be able to attach the flamegraphs? That would be very helpful. In general, the initial version I made prioritised correctness over performance, so there are likely a lot of places where it could be more efficient. I already know that the empty-struct optimisation that was added to BER hasn't been added to PER yet.

@dudycz
Author

dudycz commented Apr 10, 2024

I had to encode sample.asn 256 times to get something recorded in the flamegraph. Here it is:
rasn

@Nicceboy
Contributor

Nicceboy commented Apr 14, 2024

Seems like your test bench uses mostly integers. The asn1-codecs crate appears to use i128 internally for the Integer type, while rasn uses BigInts. I would guess that this is a big factor, at least.

I think there was a plan to make the internal representation of integer types configurable as a feature, so that a different internal type could be selected for performance when very big numbers are not required.

@XAMPPRocky
Collaborator

Yeah, that's still a todo, I should write up an issue giving details in case someone else who has more time is interested.

@Nicceboy
Contributor

> Yeah, that's still a todo, I should write up an issue giving details in case someone else who has more time is interested.

I think this is a rather important optimisation in general, and it should not take too much work to implement if we start with, for example, just i128 and BigInt as the options. If you have time to write down what you had in mind, maybe I can try to do it.

@Nicceboy
Contributor

I have been reworking the integer type (using primitives (i128) by default, and switching to larger ones on overflow, or when a big integer is created explicitly). Hopefully I can open a draft PR at the end of the week. The resulting integer will be an enum, and the type of the big integers no longer matters that much.

Not sure if it is the best approach, but it is the one I chose after trying quite a few different things. Let's leave those comments for the PR; it can still be changed completely.
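A minimal sketch of the enum idea, assuming a primitive fast path that promotes to a larger variant on overflow (here i64/i128 stand in for the real i128/BigInt pair so the example needs no external crates):

```rust
/// Sketch only: in the real rework the variants would be
/// i128 and a BigInt, not i64 and i128.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Integer {
    Small(i64),
    Large(i128),
}

impl Integer {
    /// Add on the fast path, promoting to the large variant on overflow.
    fn add(self, rhs: i64) -> Integer {
        match self {
            Integer::Small(a) => match a.checked_add(rhs) {
                Some(v) => Integer::Small(v),
                None => Integer::Large(a as i128 + rhs as i128),
            },
            Integer::Large(a) => Integer::Large(a + rhs as i128),
        }
    }
}

fn main() {
    // In-range sums stay on the primitive fast path…
    assert_eq!(Integer::Small(1).add(2), Integer::Small(3));
    // …and overflow promotes instead of panicking.
    assert_eq!(
        Integer::Small(i64::MAX).add(1),
        Integer::Large(i64::MAX as i128 + 1)
    );
    println!("ok");
}
```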

Seems like integers are not the only problem with UPER. The extensive use of new vector buffers, and the moving of that data, contributes even more: low default vector capacities, a lot of single pushes, and the creation of new buffers instead of sharing a pointer to one or reusing existing allocations slow things down quite a bit.
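To illustrate the allocation pattern (the function names are mine, not rasn's API): encoding each field into its own fresh `Vec` and copying it out, versus appending into one pre-sized shared buffer:

```rust
/// Per-field buffers: one allocation + copy + drop per field.
fn encode_fresh(values: &[u32]) -> Vec<u8> {
    let mut out = Vec::new();
    for &v in values {
        let mut field = Vec::new();            // fresh allocation
        field.extend_from_slice(&v.to_be_bytes());
        out.extend_from_slice(&field);         // copy into the output
    }                                          // `field` dropped here
    out
}

/// Shared buffer: one up-front allocation sized for the whole message.
fn encode_shared(values: &[u32]) -> Vec<u8> {
    let mut out = Vec::with_capacity(values.len() * 4);
    for &v in values {
        out.extend_from_slice(&v.to_be_bytes()); // append in place
    }
    out
}

fn main() {
    let values = [1u32, 2, 0xDEAD_BEEF];
    // Both produce identical bytes; only the allocation behaviour differs.
    assert_eq!(encode_fresh(&values), encode_shared(&values));
    println!("{:02x?}", encode_shared(&values));
}
```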

Some initial differences on M2 Pro from the integer change:

UPER:

(benchmark results screenshot)

COER (The difference was much more impactful)

(benchmark results screenshot)

I have also made an initial rework to optimise allocations in COER (the results below are based on the integer remake). Maybe UPER will follow if I have time.

(benchmark results screenshot)

So by changing the integer type and reducing allocations, it was already possible to get at least a 3x speedup for COER, based on @dudycz's benchmark.

Allocations could be improved further, but I am having painful issues with lifetimes.

@repnop
Contributor

repnop commented Jun 14, 2024

@dudycz since #256 was merged a little while ago, are you able to rerun your benchmark, at least on the decoding side of things? It should be quite a bit better now.

@dudycz
Author

dudycz commented Jun 14, 2024

Yes, I can confirm a big improvement in decoding time (on my "bench" setup I observed a decrease from 457 µs to 341 µs!). Good job!

@dudycz
Author

dudycz commented Aug 27, 2024

In the meantime I updated my benchmark with a third UPER codec, asn1rs:

| Codec | Encoding (µs) | Decoding (µs) |
| --- | --- | --- |
| rasn | 1759 | 259 |
| asn1-codecs | 187 | 122 |
| asn1rs | 72 | 65 |

@XAMPPRocky
Collaborator

Separately I think we should add continuous profiling to the CI and I've created an issue for that. Since this issue doesn't have a specific goal/end, I'm going to move this to a discussion.

#302

@librasn librasn locked and limited conversation to collaborators Aug 28, 2024
@XAMPPRocky XAMPPRocky converted this issue into discussion #303 Aug 28, 2024

