Counting with len is slow when slicing.
#355
Replies: 1 comment
Hi @itrobinson. Indexing `tdms['group_name']['channel_name']` gets you a channel object. Getting its length is fast because the length is stored in the metadata (see lines 521 to 524 in 6cb5a80). Any time you index into the channel with a slice, however, data is read from the file and returned as a numpy array. This should be faster for smaller slices that read less data, but it will always be a lot slower than getting the length of the channel.

You should be able to easily compute the length of a slice if you know the channel length, by leveraging Python's built-in `range`. E.g.:

```python
channel_length = len(tdms['group_name']['channel_name'])
slice_length = len(range(channel_length)[0:-1:2])
```
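The `range` trick above can be checked without any TDMS file at all: a `range` object supports slicing, and `len()` on the sliced result is computed arithmetically, so no data is ever materialised. A minimal sketch (the `channel_length` value here is just a stand-in for `len(tdms['group_name']['channel_name'])`):

```python
# Stand-in for the channel length read from TDMS metadata.
channel_length = 1_000_000

# Length of the slice [0:-1:2] computed without touching any channel data.
# Slicing a range produces another range; len() on it is O(1).
slice_length = len(range(channel_length)[0:-1:2])
print(slice_length)  # 500000
```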
I am counting the number of values in a channel in a large file.
This statement is very fast
but these equivalent statements are very slow
Similarly, these statements will all be very slow
(In the above, `tdms` is an `nptdms.TdmsFile` object.)

Is there any feature built into npTDMS that can quickly count slices on a channel?
Why does this matter? I often test a script by running it on a small regularly-spaced subset of a large data file. Slicing is the easiest way to do this. Then I remove the slice and leave the script running, sometimes for hours, on the full data file.
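For this test-on-a-subset workflow, the subset size can be computed up front from the channel length alone, since the length comes cheaply from metadata. A sketch using a hypothetical helper (`slice_length` and the stride value are illustrative, not part of npTDMS):

```python
def slice_length(channel_length, sl):
    """Length of channel[sl] computed without reading any channel data."""
    return len(range(channel_length)[sl])

# Stand-in for len(tdms['group_name']['channel_name']) on a large file.
n = 10_000_000
test_stride = 1000  # hypothetical stride for a quick test run

# How many samples a strided test run would process:
print(slice_length(n, slice(None, None, test_stride)))  # 10000
```

Switching from the test run to the full run then just means replacing the slice, with the counts known in advance either way.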