DAG-JSON: decodes large negative integers as floats #187

Open
vmx opened this issue Sep 7, 2023 · 0 comments
vmx commented Sep 7, 2023

The current DAG-JSON implementation cannot decode large negative integers correctly. If a value is outside the i64 range, it is automatically converted into a float. This means that decoding a number like -11959030306112471732 yields Float(-1.1959030306112471e19) instead of the expected Integer(-11959030306112471732).

Here's an example test case:

#[test]
fn decode_large_negative_integer() {
    // This value is below i64::MIN, so it only fits in an i128.
    let integer: i128 = -11959030306112471732;
    let decoded: Ipld = DagJsonCodec.decode(integer.to_string().as_bytes()).unwrap();
    assert_eq!(decoded, Ipld::Integer(integer));
}

One option is to actually decode it as an integer; another (likely easier) option is to return an error if the value is outside the i64 range. I think such an error would be better than an implicit conversion.
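As a rough sketch of the error-returning option, the decoder could parse the number as an i128 first and then reject anything that doesn't fit into an i64. The function name and error type below are illustrative, not part of the actual crate's API:

```rust
// Hypothetical sketch: reject integers outside the i64 range instead of
// silently converting them to a float. Not the crate's real API.
fn parse_dag_json_int(s: &str) -> Result<i64, String> {
    // Parse as i128 first so we can distinguish "out of i64 range"
    // from "not an integer at all".
    let wide: i128 = s.parse().map_err(|e| format!("not an integer: {e}"))?;
    i64::try_from(wide).map_err(|_| format!("integer {wide} outside i64 range"))
}

fn main() {
    assert_eq!(parse_dag_json_int("42"), Ok(42));
    // The value from this issue is below i64::MIN, so it is rejected.
    assert!(parse_dag_json_int("-11959030306112471732").is_err());
}
```

Surfacing the error at decode time makes the limitation explicit instead of silently losing precision in a float.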

Though I should add that if someone spends time on the current DAG-JSON implementation, that time might be better spent on a new DAG-JSON implementation similar to https://github.com/ipld/serde_ipld_dagcbor.

This means I'm not sure whether this issue will ever be fixed in the current implementation. I just wanted to make sure it is tracked somewhere in case someone else runs into it.
