
Failed: R2dbcTimestampOffsetProjectionSpec #895

Open
johanandren opened this issue May 12, 2023 · 6 comments

@johanandren
Member

https://github.com/akka/akka-projection/actions/runs/4960317517/jobs/8875638925?pr=894#step:7:733

Full logs:
logs_7046.zip

@johanandren
Member Author

H2: https://github.com/akka/akka-projection/actions/runs/5374660220/jobs/9750279843#step:6:834

[info] - must be able to skip envelopes but still store offset *** FAILED *** (10 seconds, 102 milliseconds)
[info]   The code passed to eventually never returned normally. Attempted 106 times over 10.053910014000001 seconds. Last failure message: None was not equal to Some(TimestampOffset(2023-06-26T06:44:20.923647Z, 1970-01-01T00:00:00Z, Map("b92dec93-793f-42cc-9537-ea79f6ace0f6" -> 6))). (R2dbcTimestampOffsetProjectionSpec.scala:1030)
[info]   org.scalatest.exceptions.TestFailedDueToTimeoutException:
[info]   at org.scalatest.enablers.Retrying$$anon$4.tryTryAgain$2(Retrying.scala:219)
[info]   at org.scalatest.enablers.Retrying$$anon$4.retry(Retrying.scala:226)
[info]   at org.scalatest.concurrent.Eventually.eventually(Eventually.scala:415)
[info]   at org.scalatest.concurrent.Eventually.eventually$(Eventually.scala:414)
[info]   at akka.actor.testkit.typed.scaladsl.ScalaTestWithActorTestKit.eventually(ScalaTestWithActorTestKit.scala:31)
[info]   at akka.projection.r2dbc.R2dbcTimestampOffsetProjectionSpec.$anonfun$new$97(R2dbcTimestampOffsetProjectionSpec.scala:1030)

@johanandren
Member Author

@johanandren
Member Author

@pvlugter
Contributor

@xeppaka

xeppaka commented Jan 10, 2024

Hi, we recently faced a problem with long DB queries and I think it's related to this issue.
We have an app with a small number of entities (which results in a small number of populated slices) in the database. When the projection makes a request for a range of slices where no entity exists yet, it uses db_timestamp >= '1970-01-01 00:00:00+00'::timestamp with time zone in the query, and no offset is saved after that query. As the other slice ranges grow in terms of saved events, the query becomes slower because the index is not used (I'm not sure why, or about the details). Here is the Postgres report for a long query (8034.857 ms):

2024-01-09 13:34:58 UTC:10.0.6.239(40998):accounts_app1@accounts:[4133]:LOG:  duration: 8034.857 ms  plan:
	Query Text: SELECT slice, persistence_id, seq_nr, db_timestamp, CURRENT_TIMESTAMP AS read_db_timestamp, tags, event_ser_id, event_ser_manifest, event_payload, meta_ser_id, meta_ser_manifest, meta_payload FROM event_journal WHERE entity_type = $1 AND slice in (128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255) AND db_timestamp >= $2  AND db_timestamp < CURRENT_TIMESTAMP - interval '100 milliseconds' AND deleted = false ORDER BY db_timestamp, seq_nr LIMIT $3
	Limit  (cost=177909.06..178025.74 rows=1000 width=1008)
	  ->  Gather Merge  (cost=177909.06..193820.47 rows=136374 width=1008)
	        Workers Planned: 2
	        ->  Sort  (cost=176909.04..177079.51 rows=68187 width=1008)
	              Sort Key: db_timestamp, seq_nr
	              ->  Parallel Seq Scan on event_journal  (cost=0.32..173170.42 rows=68187 width=1008)
	                    Filter: ((NOT deleted) AND (db_timestamp >= '1970-01-01 00:00:00+00'::timestamp with time zone) AND ((entity_type)::text = 'TransactionCore'::text) AND (slice = ANY ('{128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240,241,242,243,244,245,246,247,248,249,250,251,252,253,254,255}'::integer[])) AND (db_timestamp < (CURRENT_TIMESTAMP - '00:00:00.1'::interval)))

For the query above, no offset was saved and no entities exist in the slice range 128 to 255.
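The mechanism described above can be sketched in miniature: when no offset row exists for a slice range, the query's start timestamp falls back to the epoch, which turns the db_timestamp range predicate into one that matches every row, and Postgres chooses the parallel seq scan plus sort seen in the plan. A minimal sketch of that fallback, using hypothetical helper names rather than the actual akka-projection internals:

```scala
import java.time.Instant

// Hypothetical, simplified model of the stored offset. The real
// akka-projection R2DBC offset store is asynchronous and holds more
// state; this only illustrates the epoch fallback behind the slow
// db_timestamp >= '1970-01-01 00:00:00+00' predicate.
final case class TimestampOffset(timestamp: Instant)

def queryStartTimestamp(storedOffset: Option[TimestampOffset]): Instant =
  storedOffset match {
    case Some(offset) => offset.timestamp // normal case: resume from the saved offset
    case None         => Instant.EPOCH    // no offset saved yet: scan from the epoch
  }
```

With a saved offset the range predicate is selective; with `None` it degenerates to `db_timestamp >= '1970-01-01 00:00:00Z'`, and since no offset is ever saved for an empty slice range, every poll repeats the expensive query.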

@johanandren
Member Author

FYI I've broken that out into a separate issue @xeppaka
