Currently each identity keeps its icon embedded in the CallIdentity proto message.
Thus we need to add its own icon copy to each identity, even if it is the same icon (some generic business logo, for example).
Our 33 GB identity DB grew to 904 GB after adding 1,000,000 copies of the same 63 KiB icon, which is not practical. Instead, it could have been only 63 KiB plus HE overhead.
A better solution may be the following: add an image_id field to the CallIdentity message, set to the corresponding row's keyword in the icons usecase DB (net.example.lookup.icon, for example).
This would enable the backend to keep only one copy of the same image. The current solution is simply not feasible for any large dataset (904 GB of RAM is just too much).
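Roughly, the proposed split could look like the following hypothetical Swift sketch. It stands in for the CallIdentity proto and the icons usecase; the type and field names are illustrative only, not the project's actual definitions:

```swift
import Foundation

// Hypothetical sketch only: these types stand in for the CallIdentity proto
// and the icons usecase; names and shapes are illustrative, not the real API.

/// An identity row that references its icon by a small key instead of
/// embedding the icon bytes.
struct IdentitySketch {
    var name: String
    /// Keyword of the matching row in the icons usecase DB
    /// (served under something like "net.example.lookup.icon").
    var imageID: Data
}

/// The icons usecase stores each distinct icon exactly once.
var iconsUsecase: [Data: Data] = [:]   // imageID -> icon bytes

/// A shared 63 KiB business logo is inserted once and referenced by
/// any number of identities.
func addIdentities(names: [String], imageID: Data, icon: Data) -> [IdentitySketch] {
    iconsUsecase[imageID] = icon
    return names.map { IdentitySketch(name: $0, imageID: imageID) }
}
```

With this layout, the per-identity cost is only the small imageID reference, while the 63 KiB of icon bytes (plus HE overhead) is paid once per distinct icon rather than once per identity.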
There is a subtle security issue with using the result of the identity lookup (the image_id field) as the keyword for another lookup.
If one were to do this, the system would be insecure, because the server can alter how the image_id field is computed for particular rows, and getting back a signal about whether (and how) that field was decrypted and acted upon leaks information back to the server.
However, we might look into other ways to address the case of repeated images.
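Concretely, the dependent flow in question would look roughly like the sketch below. This is a hypothetical client-side illustration, not the project's actual API; the identity usecase name and the helper functions are made up. The point is that the keyword of the second query is derived from a value the server computed, so the client's externally observable behavior ends up steered by server-chosen data:

```swift
import Foundation

// Hypothetical client-side sketch, NOT the project's API: it only shows the
// dependent-lookup pattern that creates the problem.

/// Placeholder for a keyword-PIR round trip: returns the decrypted row
/// bytes for `keyword` in `usecase`, if present.
func pirLookup(usecase: String, keyword: Data) async throws -> Data? {
    nil
}

/// Placeholder for deserializing an identity row and reading the
/// proposed image_id field.
func extractImageID(from row: Data) -> Data? {
    nil
}

func fetchIdentityAndIcon(phoneNumber: String) async throws -> Data? {
    // 1. Private lookup of the caller's identity row
    //    (usecase names here are placeholders).
    guard let row = try await pirLookup(usecase: "net.example.lookup.identity",
                                        keyword: Data(phoneNumber.utf8)) else {
        return nil
    }

    // 2. image_id is a value the server computed and can vary per row.
    //    Whether this second request is sent at all (or errors out) is
    //    observable server-side, so the client's behavior now depends on
    //    server-chosen data and can leak information about the fetched row.
    guard let imageID = extractImageID(from: row) else { return nil }
    return try await pirLookup(usecase: "net.example.lookup.icon",
                               keyword: imageID)
}
```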