MAX_MOSAICS_EXCEEDED #44
Have you checked the number of mosaics across ALL accounts?
Is there any way to "clean up" tokens, particularly expired ones? There are only two valid tokens in these accounts; the rest are expired, but they still count toward the limit. We thought that expiry would be a good way to avoid needing regular burn transactions, but maybe that is not the case. I haven't checked all accounts yet.
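The arithmetic behind this comment can be sketched as a toy calculation. It assumes, as the comment suggests, that expired mosaics are not removed from the per-account count until some later pruning step; the function name and numbers are illustrative, not Catapult's actual validator logic.

```python
# Toy model: if every mosaic ever created still counts toward
# maxMosaicsPerAccount, a fixed creation schedule hits the limit
# no matter how short each mosaic's duration is.

def blocks_until_limit(limit: int, blocks_per_mosaic: int) -> int:
    """Block height at which the per-account mosaic count reaches
    `limit`, assuming expired mosaics are never pruned."""
    return limit * blocks_per_mosaic

# One new mosaic every 10 blocks, default limit of 1000:
print(blocks_until_limit(1000, 10))    # 10000 blocks
# Raising the limit to 65'000 only postpones the failure:
print(blocks_until_limit(65_000, 10))  # 650000 blocks
```

Under this assumption, raising `maxMosaicsPerAccount` buys time but does not fix the underlying growth; only pruning or burning expired mosaics would cap the count.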
Can you connect to the mongodb and type the commands below in the mongo client to show all the mosaics?

`use catapult`

I am able to create over 60k mosaics. Trying to see what I am doing differently.
Hi, I ran the above. We reset the node yesterday, so after running for approximately one day we only have the following:
And @amplemeter can confirm, but I think we only ran for 7 days last time before we got the error?
Yes
Did you start the chain from scratch after upgrading, or continue with the old blocks? Also, if disk space usage balloons in a short period of time, it's usually due to the auditing option being enabled.
After the first time the problem happened, I ran ./cmds/clean-all, then modified the config I mentioned above. After running for about a week, we got the same issue. I ran ./cmds/clear-data and we are now running again, I assume for about another week until we run into this error and have to reset again.
So what should I do when it happens again? I will have to wipe and restart quickly to minimize disruption. What logs or info should I collect?
Yes, thanks. This one I found. I still have the MAX_MOSAICS_EXCEEDED error to deal with, though.
Can't help you there. We're still running 0.1.0.2 in our production chains.
And it's happened again. I will pull a fresh copy from the repo, change just that one option, and run it. This shouldn't be the only way to fix such a problem; it means we have to reconfigure a lot of stuff.
What option are you planning to change? Can you provide the mosaic count from mongodb before you reset? `use catapult`
It was 1000, the default setting. The option I changed is maxMosaicsPerAccount.
I have the default setting of 1000. I am able to create a mix of mosaics (expiring and non-expiring). How long is the duration on these mosaics?
The mosaic duration was 20 blocks or so. I downloaded a fresh copy of 0.9.5.1, changed only that parameter, and ran start-all. Within a week it failed again; the mosaic count was about 1100. We have since changed our system so we don't have to deal with this anymore.
After pulling catapult-service-bootstrap v0.9.5.1, I modified the file ~/catapult-service-bootstrap/common/ruby/catapult-templates/peer_node/resources/config-network.properties.mt to change maxMosaicsPerAccount to 65'000. I then checked the generated ~/catapult-service-bootstrap/build/catapult-config/peer-node-1/userconfig/resources/config-network.properties file to confirm, and yes, the value of maxMosaicsPerAccount is 65'000.
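The edit described above can be sketched as a small script. The property name and the template file name come from the comment; the helper function and the stand-in file it creates are illustrative, not part of the bootstrap tooling.

```python
# Minimal sketch of rewriting a key in a Catapult .properties template.
from pathlib import Path

def set_property(path: Path, key: str, value: str) -> None:
    """Replace every `key = value` line in a .properties-style file."""
    lines = []
    for line in path.read_text().splitlines():
        # The key is everything before the first '=' on the line.
        if line.split("=")[0].strip() == key:
            line = f"{key} = {value}"
        lines.append(line)
    path.write_text("\n".join(lines) + "\n")

# Stand-in for the real template under
# common/ruby/catapult-templates/peer_node/resources/:
p = Path("config-network.properties.mt")
p.write_text("maxMosaicsPerAccount = 1'000\n")
set_property(p, "maxMosaicsPerAccount", "65'000")
print(p.read_text().strip())  # maxMosaicsPerAccount = 65'000
```

Note that this edits only the peer_node template; if other node roles (e.g. api nodes) render their configs from separate templates, those would need the same change.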
I then start running my algorithm, which generates a new mosaic every 10 blocks. After a few days, our system begins to fail: running a simple transaction through the cli and then checking the hash yields a 404 error. At the same time, disk usage on the service-bootstrap node cluster's server suddenly begins to balloon, increasing 20% in a day and filling the disk to capacity. Searching the logs turns up multiple failures on api-node-0_1 attributed to MAX_MOSAICS_EXCEEDED. When I counted the mosaics in some of these accounts, they were fewer than 200.
This is the second time we have had such a failure. The last time, we didn't notice it until the disk was completely full. We found the log and increased the maxMosaicsPerAccount configuration, but it has not solved the problem.