Replies: 1 comment
-
I came up with a solution along these lines:

```python
import ibis

con = ibis.duckdb.connect()  # local DuckDB connection (assumed)

t1 = ibis.examples.penguins.fetch()
t2 = con.create_table("penguins", schema=t1.schema(), overwrite=True)
for b in t1.to_pyarrow_batches(chunk_size=10):
    con.insert("penguins", b)
```

Is there a more convenient way to achieve it?
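For reference, here is the same pattern wrapped in a reusable helper; this is only a sketch, and the function name `copy_in_batches` is made up here rather than part of the reply:

```python
import ibis

def copy_in_batches(src, dest_con, name, chunk_size=1_000_000):
    """Stream an Ibis table expression into dest_con as PyArrow record batches."""
    dest_con.create_table(name, schema=src.schema(), overwrite=True)
    for batch in src.to_pyarrow_batches(chunk_size=chunk_size):
        dest_con.insert(name, batch)

# Usage mirroring the snippet above; chunk_size bounds how much is held in memory per batch.
con = ibis.duckdb.connect()
t1 = ibis.examples.penguins.fetch()
copy_in_batches(t1, con, "penguins", chunk_size=10)
```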
-
Hi all,
I would like to read a large table from BigQuery and write it to a local DuckDB table, without materializing the entire thing in memory as an intermediate step. I can't directly pass the `ir.Table` to `duckdb_con.create_table`; it seems like it tries to execute the expression inside the DuckDB context. Most examples I've seen call `to_pyarrow` on the table, but I assume that loads the entire table into memory. I also tried `to_pyarrow_batches()` and got an error. What is the right way to do it?
Thanks!
Related: #11683, #4800, #8115.
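For illustration, a minimal sketch of the streaming copy described above, using the batch-insert pattern from the reply; the project, dataset, table name, and file path are assumptions, not taken from the original post:

```python
import ibis

# Hypothetical connections; swap in your own project, dataset, and path.
bq_con = ibis.bigquery.connect(project_id="my-project", dataset_id="my_dataset")
duckdb_con = ibis.duckdb.connect("local.ddb")

src = bq_con.table("big_table")

# Create an empty DuckDB table with the source schema, then stream PyArrow
# record batches into it so the full result never sits in memory at once.
duckdb_con.create_table("big_table", schema=src.schema(), overwrite=True)
for batch in src.to_pyarrow_batches(chunk_size=1_000_000):
    duckdb_con.insert("big_table", batch)
```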