Fix clippy warning about loop that never executes #133
@@ -86,23 +86,25 @@ pub fn normal_mode_handler(app: &mut App, key: KeyEvent) {
     match c.do_get(ticket.into_request()).await {
         Ok(mut stream) => {
             let mut batches: Vec<RecordBatch> = Vec::new();
-            while let Some(maybe_batch) = stream.next().await {
+            // temporarily only show the first batch to avoid
+            // buffering massive result sets. Eventually there should
+            // be some sort of paging logic
+            // see https://github.com/datafusion-contrib/datafusion-tui/pull/133#discussion_r1756680874
+            // while let Some(maybe_batch) = stream.next().await {
+            if let Some(maybe_batch) = stream.next().await {
                 match maybe_batch {
                     Ok(batch) => {
                         info!("Batch rows: {}", batch.num_rows());
                         batches.push(batch);
                         break;
Review discussion on this change:

- "the complaint is that this `break` means the outer loop stops. I think the …"
- "it actually was the intent, but only as a short-term hack to prevent queries with a large number of results from taking a long time to return a result. I still need to implement auto-pagination logic."
- "since this is FlightSQL and there is no DataFrame to work with, I had in mind auto-appending …"
- "I suggest: … I don't think there is any reason to use …"
- "I thought I had gotten the LIMIT / OFFSET idea from DBeaver, but I just double-checked and I'm not seeing it there, so yes, I agree it doesn't make sense. My original idea was to just paginate through the stream, which I believe is the same as your second point, so I think we're aligned on the approach here."
- "related discussion #137"
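The auto-pagination idea discussed above could be sketched roughly as follows. This is a hypothetical illustration only, not the project's code: a plain iterator stands in for the async FlightSQL record-batch stream, and `Batch`, `fetch_page`, and `page_size` are invented names. The point is to pull a bounded page of batches per fetch instead of buffering the whole result set.

```rust
// Hypothetical sketch of stream pagination: pull at most `page_size`
// batches per fetch and let the caller ask for the next page later.
// A plain iterator stands in for the FlightSQL record-batch stream.

/// Stand-in for an Arrow RecordBatch; only the row count matters here.
struct Batch {
    num_rows: usize,
}

/// Pull up to `page_size` batches from the stream. Returns the page and a
/// flag saying whether more data may remain (false once a short page is seen).
fn fetch_page<I: Iterator<Item = Batch>>(
    stream: &mut I,
    page_size: usize,
) -> (Vec<Batch>, bool) {
    let page: Vec<Batch> = stream.by_ref().take(page_size).collect();
    let may_have_more = page.len() == page_size;
    (page, may_have_more)
}

fn main() {
    // Fake stream of five batches of 100 rows each.
    let mut stream = (0..5).map(|_| Batch { num_rows: 100 });

    let (first_page, more) = fetch_page(&mut stream, 2);
    let rows: usize = first_page.iter().map(|b| b.num_rows).sum();
    // First page holds 2 batches (200 rows); 3 batches remain in the stream.
    println!("first page: {} rows, more = {}", rows, more);
}
```

One known rough edge of this sketch: when the stream ends exactly on a page boundary, `may_have_more` stays true until one extra (empty) fetch — real paging logic would need to handle that.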
                    }
                    Err(e) => {
                        error!("Error getting batch: {:?}", e);
                        let elapsed = start.elapsed();
                        query.set_error(Some(e.to_string()));
                        query.set_execution_time(elapsed);
                        break;
                    }
                }
            }

            let elapsed = start.elapsed();
            let rows: usize =
                batches.iter().map(|r| r.num_rows()).sum();
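For context, the clippy lint being fixed here (`clippy::never_loop`) fires when a loop body unconditionally exits on its first iteration, so the "loop" never actually loops. A minimal standalone illustration of the pattern and the `if let` fix, using a plain iterator rather than the project's async stream (the function names are invented for this example):

```rust
// Minimal illustration of the clippy::never_loop pattern this PR fixes.

fn first_item_bad(items: Vec<i32>) -> Option<i32> {
    let mut iter = items.into_iter();
    // clippy::never_loop: the body always returns on the first iteration,
    // so this `while let` is a loop that never executes more than once.
    #[allow(clippy::never_loop)]
    while let Some(item) = iter.next() {
        return Some(item); // unconditional exit, like the `break` above
    }
    None
}

fn first_item_good(items: Vec<i32>) -> Option<i32> {
    let mut iter = items.into_iter();
    // The fix: `if let` states directly that only the first item is wanted.
    if let Some(item) = iter.next() {
        return Some(item);
    }
    None
}

fn main() {
    assert_eq!(first_item_bad(vec![1, 2, 3]), Some(1));
    assert_eq!(first_item_good(vec![1, 2, 3]), Some(1));
    assert_eq!(first_item_good(Vec::new()), None);
    println!("both forms return only the first item");
}
```

Both functions behave identically; the `if let` form just makes the "only the first element matters" intent explicit, which is exactly why clippy accepts it.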
Review comment:

- "This makes clippy happy by being explicit that only the first batch is needed."