Hi,
as you know, I'm finally moving Matterbridge from the old API to the new API.
I'm opening this issue not only on behalf of Matterbridge but also on behalf of the Shelly group. The matter discussed here is equally relevant to the Shelly group and its use cases.
We would appreciate your insights and support in addressing this matter.
Thank you in advance for your attention.
During testing on a low-powered device (a Radxa rock-s0, which has 512 MB of RAM), where the old API could bridge even more than 100 devices without issues, we discovered that the new API produces two kinds of issues:
Memory issues:
Each device now needs approximately 2-3 MB of RAM during the subscription phase. Before, with the old API, the memory requirement was below 1 MB, and there were absolutely no spikes in memory allocation during the whole process.
Performance issues when the controller sends the subscription request:
The phase where the log shows hundreds of "Read attribute..." lines is so slow that the subscription fails (see the toy model below): the controller sends a new request before it receives the data for the previous one (Apollon can describe this much better than me), so the subscription fails in an infinite loop.
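To make the race concrete, here is a toy async model (all timings and counts are hypothetical, and this is not matter.js code): the bridge works through attribute reads for the priming report while the controller aborts and resubscribes after a fixed timeout, so once the read phase is slower than the timeout, the report is cancelled and restarted forever.

```typescript
// Toy model of the failure loop only; stands in for the slow "Read attribute..." phase.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function primingReport(attributeCount: number, msPerRead: number, signal: AbortSignal) {
  for (let i = 0; i < attributeCount; i++) {
    if (signal.aborted) throw new Error("cancelled by resubscribe"); // controller gave up
    await sleep(msPerRead); // one "Read attribute..." log line
  }
}

async function subscribe(timeoutMs: number): Promise<boolean> {
  const abort = new AbortController();
  const timer = setTimeout(() => abort.abort(), timeoutMs); // controller resubscribe timeout
  try {
    // 36 devices x ~100 attributes (hypothetical wildcard size): ~3600 reads.
    await primingReport(3600, 1, abort.signal);
    return true;
  } catch {
    return false; // the controller retries, and the same thing happens again
  } finally {
    clearTimeout(timer);
  }
}

// With ~3600 reads at ~1 ms each against a 1000 ms timeout, subscribe() never
// succeeds, which is the infinite loop observed on the rock-s0.
subscribe(1000).then((ok) => console.log(ok ? "subscribed" : "subscription failed"));
```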
The new API currently uses a legacy-abstraction layer that we need to maintain until we can remove the legacy API completely. Additionally, we are already aware of a memory-optimization idea for big wildcard subscriptions, as well as some possible optimizations related to the more dynamic nature of the new API.
Basically, this ties into tasks that are already planned.
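For readers, a minimal illustration of one general shape such a wildcard-subscription optimization could take (this is not matter.js internals): materializing every attribute value of a wildcard read up front keeps all of them on the heap until the report is encoded, while yielding them lazily lets each value be encoded and released before the next read.

```typescript
// Illustration only, not matter.js code.
type AttributeValue = { endpoint: number; cluster: number; attribute: number; value: unknown };

// Eager: every value of the wildcard read is retained until the whole report is built,
// so peak heap grows with the number of attributes.
function readWildcardEager(read: (i: number) => AttributeValue, count: number): AttributeValue[] {
  const all: AttributeValue[] = [];
  for (let i = 0; i < count; i++) all.push(read(i));
  return all;
}

// Lazy: each value can be encoded into the report and released before the next read.
function* readWildcardLazy(read: (i: number) => AttributeValue, count: number): Generator<AttributeValue> {
  for (let i = 0; i < count; i++) yield read(i);
}
```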
Heap data after the controller is connected but before the subscription starts (36 devices loaded); a sketch of how such snapshots can be captured follows the three data blocks below.
"memoryUsage": {
"rss": "170.39 MB",
"heapTotal": "98.50 MB",
"heapUsed": "87.05 MB",
"external": "3.53 MB",
"arrayBuffers": "108.84 KB"
},
"heapStats": {
"total_heap_size": "98.50 MB",
"total_heap_size_executable": "3.91 MB",
"total_physical_size": "98.13 MB",
"total_available_size": "426.85 MB",
"used_heap_size": "87.06 MB",
"heap_size_limit": "515.00 MB",
"malloced_memory": "1.02 MB",
"peak_malloced_memory": "8.75 MB",
"does_zap_garbage": "0.00 KB",
"number_of_native_contexts": "0.00 KB",
"number_of_detached_contexts": "0.00 KB",
"total_global_handles_size": "264.00 KB",
"used_global_handles_size": "239.63 KB",
"external_memory": "3.53 MB"
},
Immediately after the subscription:
"memoryUsage": {
"rss": "228.40 MB",
"heapTotal": "150.25 MB",
"heapUsed": "123.08 MB",
"external": "3.55 MB",
"arrayBuffers": "125.59 KB"
},
"heapStats": {
"total_heap_size": "150.25 MB",
"total_heap_size_executable": "4.91 MB",
"total_physical_size": "150.33 MB",
"total_available_size": "389.73 MB",
"used_heap_size": "123.09 MB",
"heap_size_limit": "515.00 MB",
"malloced_memory": "1.02 MB",
"peak_malloced_memory": "10.38 MB",
"does_zap_garbage": "0.00 KB",
"number_of_native_contexts": "0.00 KB",
"number_of_detached_contexts": "0.00 KB",
"total_global_handles_size": "264.00 KB",
"used_global_handles_size": "241.22 KB",
},
Two minutes after the subscription, and stable at these values over time:
"memoryUsage": {
"rss": "181.45 MB",
"heapTotal": "137.25 MB",
"heapUsed": "93.32 MB",
"external": "3.54 MB",
"arrayBuffers": "117.47 KB"
},
"heapStats": {
"total_heap_size": "137.25 MB",
"total_heap_size_executable": "5.16 MB",
"total_physical_size": "104.25 MB",
"total_available_size": "419.80 MB",
"used_heap_size": "93.33 MB",
"heap_size_limit": "515.00 MB",
"malloced_memory": "1.02 MB",
"peak_malloced_memory": "10.72 MB",
"does_zap_garbage": "0.00 KB",
"number_of_native_contexts": "0.00 KB",
"number_of_detached_contexts": "0.00 KB",
"total_global_handles_size": "264.00 KB",
"used_global_handles_size": "241.09 KB",
"external_memory": "3.54 MB"
}
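For reference, snapshots like the above can be produced with Node.js's built-in process.memoryUsage() and v8.getHeapStatistics(), whose fields match the keys shown in the data; a minimal sketch (the toMB formatting helper is hypothetical, not from Matterbridge):

```typescript
import { getHeapStatistics } from "node:v8";

// Hypothetical helper: render a byte count as "NN.NN MB".
const toMB = (bytes: number) => `${(bytes / 1024 / 1024).toFixed(2)} MB`;

export function memorySnapshot() {
  const usage = process.memoryUsage();   // rss, heapTotal, heapUsed, external, arrayBuffers
  const heap = getHeapStatistics();      // total_heap_size, used_heap_size, ...
  return {
    memoryUsage: {
      rss: toMB(usage.rss),
      heapTotal: toMB(usage.heapTotal),
      heapUsed: toMB(usage.heapUsed),
      external: toMB(usage.external),
      arrayBuffers: toMB(usage.arrayBuffers),
    },
    heapStats: {
      total_heap_size: toMB(heap.total_heap_size),
      used_heap_size: toMB(heap.used_heap_size),
      total_available_size: toMB(heap.total_available_size),
      heap_size_limit: toMB(heap.heap_size_limit),
      malloced_memory: toMB(heap.malloced_memory),
      peak_malloced_memory: toMB(heap.peak_malloced_memory),
    },
  };
}
```

Note that v8 fields such as does_zap_garbage and number_of_native_contexts are plain flags/counts rather than byte sizes, which is presumably why formatting them as sizes produces the odd "0.00 KB" entries in the dumps above.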
A few screenshots (heap/load graphs):
New API paired and connected with 30 devices (2.0.0)
New API paired and connected with 30 devices after 2 minutes
New API paired and connected with 30 devices and subscription failing
To compare, the old API with 30 devices:
Old API paired and connected with 30 devices (no spikes on load)
Old API paired and connected with 30 devices after a few minutes (no spikes on load)