Cope with missing columns converting from json to df #3
Hi,
Thanks for writing this package, it is really useful!
I was using it and found that in cases where not all of the fields are present in every row of the data, the converter chokes because not all of the rows end up the same length. For example, try something like this:
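Something along these lines shows the problem (a made-up illustration rather than my original data, and the naive row-bind at the end just stands in for the package's conversion step):

```r
library(jsonlite)

## Made-up rows: the second one is missing the "age" field.
json <- '[
  {"name": "alice", "age": 30, "city": "leeds"},
  {"name": "bob", "city": "york"},
  {"name": "carol", "age": 25, "city": "hull"}
]'

content <- fromJSON(json, simplifyVector = FALSE)

## A naive conversion that assumes every row has the same fields
## recycles the short row (with a warning) and misaligns the columns:
do.call(rbind, lapply(content, unlist))
```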
To deal with this I added a function `content_to_df`, which is called from `getContentAsDataFrame` and works like this: if the rows are all the same length, it converts as before; if they are not, it assumes that the first longest row has all of the correct column names, adds empty fields to any rows that are missing some of them, sorts the columns back into the original order, and then returns a data frame as before.
I added a test for this condition to the test file as well.
I also changed the JSON parser to `jsonlite` (I think you have already done this in the CRAN version). If you could bring these changes into the CRAN version as well I would be really grateful, as I want to call it from a package I am developing now.
Cheers,
David