Heroku database at capacity #51
Comments
Also relevant to this and why it's important:
What would be the price with Crunchy Data, @wgwz?
For 512 GB (double our current capacity) and similar CPU/memory characteristics to our current Heroku database: $191/month.
How complex would a migration be?
Crunchy Data has docs for migrating to them from Heroku Postgres. There's the pg_dump/pg_restore migration method. Either way, there would need to be downtime for the indexer itself, probably several hours. It's potentially a bit tricky, but it doesn't look too bad.
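For reference, the dump/restore path sketches out roughly like this. This is a dry run that only prints the commands; the connection strings, dump filename, and `--jobs` count are placeholders, not our actual values:

```shell
#!/bin/sh
# Dry-run sketch of the pg_dump/pg_restore migration: prints the
# commands rather than executing them. SOURCE_URL and TARGET_URL are
# placeholder connection strings for Heroku and Crunchy Bridge.
SOURCE_URL="${SOURCE_URL:-postgres://user:pass@heroku-host:5432/indexer}"
TARGET_URL="${TARGET_URL:-postgres://user:pass@crunchy-host:5432/indexer}"
DUMP_FILE="indexer.dump"

# Custom-format dump so pg_restore can run parallel jobs and
# selectively skip objects if needed.
echo pg_dump --format=custom --no-owner --no-acl \
     --file="$DUMP_FILE" "$SOURCE_URL"

# Restore into the new database; --jobs parallelizes the slow parts
# (index rebuilds in particular).
echo pg_restore --no-owner --no-acl --jobs=4 \
     --dbname="$TARGET_URL" "$DUMP_FILE"
```

The custom format (rather than plain SQL) is what makes the parallel restore possible, which matters given the downtime estimate above.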
TODO: confirm that Crunchy Data has backups available
Hey team! Please add your planning poker estimate with Zenhub @blushi @wgwz |
I spoke with the Crunchy Data customer service team today and they answered some of our questions:
I also asked them these two questions, which they'll get back to me on:
I also got a brief tour of the UI and it has some nice features.
I got replies from the Crunchy Data team on these items:
Standard caveats apply for the replica method. It could be 5 minutes if they don't have any indexes that need rebuilding. pg_dump is variable due to network considerations, so it's hard to estimate, but as a total guess I would have them plan for at least an hour, more or (maybe) less depending on the type of instance they select on Crunchy.
Yes, but that's entirely on the customer. If they have writes sneaking in, they will be lost.
Upon deployment for us we need to update:
@blushi I forgot to add a vote yesterday. I've added my vote now, but it's a bit of a cheat since it's already done :-)
The production indexer database is at capacity; we're on standard-2:
https://devcenter.heroku.com/articles/heroku-postgres-plans#standard-tier
Standard-2 price is $200/month for 256 GB max storage:
https://elements.heroku.com/addons/heroku-postgresql#standard-2
The next bump in the standard plan is standard-3, $400/month for 512 GB max storage:
https://elements.heroku.com/addons/heroku-postgresql#standard-3
This task is to research alternative options that may be cheaper as the database grows.
One discussed previously is AWS RDS, but there are other options as well.
E.g. https://www.crunchydata.com/products/crunchy-bridge (among many others)
Crunchy Data makes database management easier than AWS while still offering a competitive price point.
We need to decide where the database goes and how we migrate it.
E.g., do we manually stop the indexer process, do a manual migration to the new database using pg_dump and pg_restore, and then restart the indexer pointed at the new database once it's complete?
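If we go the manual route, the cutover could look roughly like this. Another dry-run sketch: the app name, process type, and new connection string are placeholders, and note that if the Heroku Postgres add-on still manages `DATABASE_URL` we may need a differently named config var instead:

```shell
#!/bin/sh
# Dry-run sketch of the cutover: echoes the Heroku CLI commands rather
# than running them. APP, the "worker" process type, and
# NEW_DATABASE_URL are placeholders for our actual setup.
APP="my-indexer-app"
NEW_DATABASE_URL="postgres://user:pass@crunchy-host:5432/indexer"

# 1. Stop the indexer so no writes sneak in during the migration
#    (per Crunchy's warning above, late writes would be lost).
echo heroku ps:scale worker=0 --app "$APP"

# 2. Run the pg_dump/pg_restore migration here.

# 3. Point the app at the new database and restart the indexer.
#    Caveat: DATABASE_URL may still be managed by the Heroku Postgres
#    add-on; a custom config var may be needed instead.
echo heroku config:set DATABASE_URL="$NEW_DATABASE_URL" --app "$APP"
echo heroku ps:scale worker=1 --app "$APP"
```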
Alternatively, this task could accept the cost and just bump to the $400/month plan.
Alternatively or additionally, we could look to filter out some data the indexer is capturing.
E.g., there's some kind of network health-check ping that we're storing; there's a lot of it, and I doubt we care about indexing those events.
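Reclaiming that space could look something like the sketch below. To be clear, the table name `events` and column `type` are entirely hypothetical here; we'd need to check the real indexer schema first, and this also only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of pruning stored health-check pings. The table
# "events" and column "type" are hypothetical names; verify against
# the actual indexer schema before running anything like this.
DATABASE_URL="${DATABASE_URL:-postgres://user:pass@heroku-host:5432/indexer}"

echo psql "$DATABASE_URL" -c \
  "DELETE FROM events WHERE type = 'network_health_check';"

# VACUUM afterwards so Postgres can reuse the freed space.
echo psql "$DATABASE_URL" -c "VACUUM events;"
```

Filtering at ingest time (not storing the pings at all) would avoid the problem growing back, but that's an indexer code change rather than a database operation.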